2016
DOI: 10.1587/nolta.7.395

Robustness of hardware-oriented restricted Boltzmann machines in deep belief networks for reliable processing

Abstract: Remarkable hardware robustness of deep learning is revealed from an error-injection analysis performed using a custom hardware model implementing parallelized restricted Boltzmann machines (RBMs). RBMs used in deep belief networks (DBNs) demonstrate robustness against memory errors during and after learning. Fine-tuning has a significant impact on the recovery of accuracy in the presence of static errors that may modify structural data of RBMs. The proposed hardware networks with fine-graded memory distribu…
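The abstract describes injecting static errors into the memory that stores RBM structural data (weights and biases) and measuring how fine-tuning recovers accuracy. Below is a minimal sketch of that kind of experiment, flipping random bits in a fixed-point copy of a weight matrix; the 16-bit format, error rate, and function names are illustrative assumptions, not the parameters or code used in the paper.

```python
# Illustrative bit-flip error injection into RBM weight memory (assumed
# 16-bit fixed-point format; not the paper's actual memory layout).
import numpy as np

def inject_bit_flips(weights, bit_error_rate, n_bits=16, scale=256.0, rng=None):
    """Flip one random bit in each weight word selected by bit_error_rate."""
    rng = np.random.default_rng() if rng is None else rng
    half = 1 << (n_bits - 1)
    # Quantize float weights to signed fixed-point and view as unsigned n-bit words.
    q = np.clip(np.round(weights * scale), -half, half - 1).astype(np.int64)
    word = q & ((1 << n_bits) - 1)
    # Select words to corrupt and pick one bit position per word.
    hit = rng.random(word.shape) < bit_error_rate
    pos = rng.integers(0, n_bits, size=word.shape)
    word = np.where(hit, word ^ (1 << pos), word)
    # Re-interpret as signed fixed-point and convert back to float weights.
    signed = np.where(word >= half, word - (1 << n_bits), word)
    return signed.astype(np.float64) / scale

# Hypothetical usage: corrupt trained weights, test accuracy, fine-tune, test again.
# W_err = inject_bit_flips(W_trained, bit_error_rate=1e-3)
```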

Cited by 5 publications (4 citation statements)
References 19 publications
“…1) The hardware implementation of RBMs has been studied, and scalable and highly parallel RBM microelectronic systems have been developed and analyzed. [4][5][6] In layer-by-layer RBM structures, the stochastic iterative learning efficiently reduces computation time, although all the learned data such as connection weights and node biases must be stored in memory and updated during learning. Figure 2(b) shows that the average number of computation steps is thereby reduced to half the default value in RBM learning with the MNIST digit dataset.…”
Section: Artificial Neurons With Sequential Synapse Operation
confidence: 99%
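The excerpt above refers to layer-by-layer RBM learning in which connection weights and node biases are held in memory and updated stochastically. As a point of reference, a single contrastive-divergence (CD-1) update for one data vector looks roughly like the sketch below; the learning rate, sampling, and array shapes are generic assumptions, not the cited hardware's update rule.

```python
# Minimal CD-1 update for one RBM and one data vector (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """Update the stored connection weights and node biases in place."""
    # Positive phase: sample hidden units from the data vector.
    h_prob0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h_prob0.shape) < h_prob0).astype(float)
    # Negative phase: one Gibbs step back down to the visible layer and up again.
    v_prob1 = sigmoid(h0 @ W.T + b_vis)
    h_prob1 = sigmoid(v_prob1 @ W + b_hid)
    # Stochastic iterative update of the learned data held in memory.
    W += lr * (np.outer(v0, h_prob0) - np.outer(v_prob1, h_prob1))
    b_vis += lr * (v0 - v_prob1)
    b_hid += lr * (h_prob0 - h_prob1)
    return W, b_vis, b_hid
```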
“…A drawback of such high-performance systems is that they consume more power than the field-programmable gate array (FPGA) and application-specific integrated circuit implementations of DL. [3][4][5][6] The power area density of the chips limits the total computing acceleration. To satisfy the growing DL needs of data scientists, more efficient hardware systems are required.…”
Section: Introduction
confidence: 99%
“…There is much room for additional learning in the same hardware to recover from such data drift. [5][6][7] Before discussing NN inference circuits, we tried to simplify the learning sequence of the NN by focusing on backward connections between the hidden layer and the output layer. 26) A simple mechanism, called "random backpropagation NN," has been presented.…”
Section: NN Simulations Of Analog Neurons With Resistive Synapses
confidence: 99%
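The "random backpropagation NN" mentioned above replaces the transposed forward weights in the backward pass with fixed random backward connections between the output and hidden layers (feedback-alignment style). A toy sketch of such an update follows, with layer sizes, learning rate, and initialization as illustrative assumptions; it says nothing about the cited circuit implementation.

```python
# Toy random-backpropagation update using fixed random backward connections B.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, target, W1, W2, B, lr=0.05):
    """One update of a two-layer network; B replaces W2.T in the backward pass."""
    h = sigmoid(x @ W1)            # hidden activations
    y = sigmoid(h @ W2)            # output activations
    err = y - target               # output error
    # The error reaches the hidden layer through the fixed random matrix B,
    # so the forward output weights never need to be read backwards.
    delta_h = (err @ B) * h * (1.0 - h)
    W2 -= lr * np.outer(h, err)
    W1 -= lr * np.outer(x, delta_h)
    return W1, W2

# Hypothetical MNIST-sized setup:
# W1 = rng.normal(0, 0.1, (784, 100)); W2 = rng.normal(0, 0.1, (100, 10))
# B  = rng.normal(0, 0.1, (10, 100))   # fixed random backward connections
```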
“…A drawback of such high-performance systems based on GP computers is that they are more power-hungry than the custom-made logic circuits in a field-programmable gate array or application-specific integrated circuit. [4][5][6][7] To implement the intelligence of NNs on tiny systems such as recent Internet of Things (IoT) devices, the computation power must be reduced by using novel hardware architectures and calculation cores. Recently, binary connections have been applied to DNN computing in a time-domain processing architecture to enable time- and energy-efficient computing in a fully spatially unrolled architecture utilizing two-terminal memristive cells in a small circuit.…”
Section: Introduction
confidence: 99%
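The last excerpt mentions binary connections for DNN computing on two-terminal memristive cells; the attraction is that with ±1 weights every multiply-accumulate reduces to an add or subtract. A generic software sketch of a sign-binarized forward pass is shown below; it is an illustration of the general idea, not the cited time-domain circuit.

```python
# Generic binary-connection forward pass with sign-binarized weights.
import numpy as np

def binarize(w):
    """Map real-valued weights to +1/-1."""
    return np.where(w >= 0.0, 1.0, -1.0)

def binary_forward(x, W_real, bias):
    """With +/-1 connections, each multiply-accumulate is just an add or subtract."""
    return x @ binarize(W_real) + bias
```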