2015 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2015.7280785
Efficient training algorithms for neural networks based on memristive crossbar circuits

Abstract: We have adapted the backpropagation algorithm for training a multilayer perceptron classifier implemented with memristive crossbar circuits. The proposed training approach takes into account the switching dynamics of a particular, though very typical, type of memristive device and the weight update restrictions imposed by the crossbar topology. The simulation results show that, for a crossbar-based multilayer perceptron with one hidden layer of 300 neurons, the misclassification rate on the MNIST benchmark could be as low as 1.47% and 4.…
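The training approach summarized above must respect the weight update restrictions imposed by the crossbar topology. As a minimal illustration of how such restrictions are often handled in simulation, the sketch below applies a sign-based ("Manhattan rule" style) update with fixed-amplitude steps and a clipped conductance range; the function name, step size, and bounds are illustrative assumptions, not necessarily the authors' exact scheme.

import numpy as np

def manhattan_rule_update(weights, grad, step=0.01, g_min=-1.0, g_max=1.0):
    """Sign-based weight update, a common simplification for crossbar training.

    Each weight moves by a fixed step in the direction opposite the gradient,
    mimicking fixed-amplitude set/reset pulses applied to memristive devices.
    Clipping models the bounded conductance range of the devices.
    NOTE: illustrative sketch only; step size and bounds are assumptions.
    """
    updated = weights - step * np.sign(grad)
    return np.clip(updated, g_min, g_max)

# Example: update a small 3x2 weight matrix with a dummy gradient
w = np.zeros((3, 2))
g = np.array([[0.3, -0.1], [0.0, 0.5], [-0.2, 0.4]])
w = manhattan_rule_update(w, g)
print(w)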

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
1
1
1
1

Citation Types

2
60
0

Year Published

2016
2016
2023
2023

Publication Types

Select...
5
3

Relationship

0
8

Authors

Journals

citations
Cited by 80 publications
(62 citation statements)
references
References 36 publications
2
60
0
Order By: Relevance
“…In the case of online training of neural networks (via backpropagation) in memristor arrays, it has been shown 46 that critical device issues include the programming bit precision (roughly 6 bits, or 64 conductance levels, are needed) and asymmetry in the ON versus OFF switching, since even a small asymmetry can degrade classification accuracy significantly. Other work 47 has shown that selectively optimizing the operating point for ON versus OFF switching, which can involve modifying the applied voltages and pulse widths, can produce acceptable results, but that the percentage of fully stuck ON or OFF devices can, in turn, lead to large errors. Nonetheless, many studies have shown that retraining the network in the presence of stuck ON or OFF cells can almost fully compensate for the defects and regain the classification accuracy, even for up to 20% defects 48.…”
Section: Nature Electronics (mentioning)
confidence: 99%
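A rough way to see how the roughly 6-bit (64-level) programming precision and ON/OFF switching asymmetry mentioned above enter a training simulation is to quantize each requested conductance change to whole device levels and to scale potentiation and depression differently. The function name, asymmetry factor, and conductance range below are assumptions for illustration only.

import numpy as np

def apply_asymmetric_update(g, delta, n_levels=64, g_min=0.0, g_max=1.0,
                            depression_gain=0.8):
    """Quantize a conductance update to n_levels (~6 bits) and model
    asymmetric ON (potentiation) vs OFF (depression) switching.

    depression_gain < 1 means decreases in conductance are weaker than
    increases of the same requested size (an assumed asymmetry model).
    """
    step = (g_max - g_min) / (n_levels - 1)   # one conductance level
    if delta < 0:
        delta *= depression_gain              # weaker depression (assumed)
    n_steps = np.round(delta / step)          # whole programming pulses
    return float(np.clip(g + n_steps * step, g_min, g_max))

# Example: a requested +0.03 update vs a requested -0.03 update
print(apply_asymmetric_update(0.5, +0.03))
print(apply_asymmetric_update(0.5, -0.03))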
“…For large-scale learning tasks, multilayer neural networks (MNNs) can perform impressively well and produce state-of-the-art results when massive computational power is available [20,21]. Learning in MNNs relies on continuously updating large matrices of synaptic weights by local rules [22,23]. The BP algorithm is a common local learning algorithm and is widely used for training MNNs.…”
Section: MNN Concepts (mentioning)
confidence: 99%
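To make the "local rule" character of backpropagation mentioned above concrete, the sketch below performs one weight update for a single-hidden-layer MLP: every weight change depends only on the pre-synaptic activity and a back-propagated error term for the post-synaptic neuron. The network sizes, sigmoid activation, and learning rate are illustrative assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(x, t, W1, W2, lr=0.1):
    """One backpropagation update for a 1-hidden-layer MLP (sketch).

    Each weight update is local: Delta W = lr * (post-synaptic error term)
    outer-product (pre-synaptic activity).
    """
    # Forward pass
    h = sigmoid(W1 @ x)          # hidden activations
    y = sigmoid(W2 @ h)          # network outputs
    # Backward pass (mean-square-error loss)
    delta_out = (y - t) * y * (1 - y)              # output-layer error
    delta_hid = (W2.T @ delta_out) * h * (1 - h)   # hidden-layer error
    # Local weight updates
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return W1, W2

# Example: tiny network with 4 inputs, 3 hidden units, 2 outputs
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(2, 3))
x, t = rng.normal(size=4), np.array([0.0, 1.0])
W1, W2 = backprop_step(x, t, W1, W2)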
“…In this section, we give a short sketch of the back-propagation technique [25,23]. The actual output value of the neural network is denoted by y_j and the target (label) value is denoted by t_j; we can use the mean square error as the error function (Figure 13).…”
Section: MNN Algorithm (mentioning)
confidence: 99%
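With y_j denoting the actual output and t_j the target value, as in the excerpt above, the mean square error and the resulting output-layer derivative can be written as follows (the 1/2 factor is a common convention assumed here, not taken from the excerpt):

E = \frac{1}{2}\sum_{j}\left(t_j - y_j\right)^2,
\qquad
\frac{\partial E}{\partial y_j} = y_j - t_j .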
“…Here, V_read = 0.5 V is the 'read' amplitude and β = 10^4, which corresponds to similar structures [41]. In (22), the symbols c and w_i denote the bias and the synaptic weights, respectively.…”
Section: Benchmark Circuits (mentioning)
confidence: 99%
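Equation (22) itself is not reproduced in the excerpt above, so the sketch below only illustrates the generic role the quoted symbols could play in a crossbar neuron model: a read-voltage-scaled weighted sum of inputs plus a bias c, passed through a steep activation whose gain involves β. Every functional form here is an assumption, not the cited paper's equation (22).

import numpy as np

V_READ = 0.5   # 'read' pulse amplitude in volts, as quoted above
BETA = 1e4     # steepness/scale parameter quoted above (its exact role is assumed here)

def neuron_output(x, w, c):
    """Generic crossbar neuron sketch: weighted sum of inputs scaled by the
    read voltage, plus bias c, squashed by a steep tanh.  This is NOT the
    cited paper's equation (22); it only shows where V_read, beta, c, and
    the weights w_i could enter such a model.
    """
    return np.tanh(BETA * (V_READ * np.dot(w, x) + c))

# Example with arbitrary inputs, weights, and bias
print(neuron_output(np.array([0.2, -0.1, 0.4]), np.array([0.5, 0.3, -0.2]), c=0.01))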