2015 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2015.7280658
Gradient-descent-based learning in memristive crossbar arrays

Cited by 25 publications (16 citation statements); references 18 publications.
“…The key issue addressed in our work is how to use the plasticity effects in synapses represented by memristors with multiple resistive states to locally implement the learning rule. The main distinction between our results and the related studies [25] is that we have implemented the mechanism that propagates the error backwards, which is needed for multi-layered networks. This is important for deep learning schemes.…”
Section: Discussion (mentioning; confidence: 90%)
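The mechanism the quote refers to is the standard backward pass of error through a hidden layer. A minimal sketch, with illustrative names (W2, delta2, z1) not taken from either paper and a tanh activation assumed:

```python
import numpy as np

def backprop_hidden_error(W2, delta2, z1):
    """Hidden-layer error: delta1 = f'(z1) * (W2 @ delta2), with f = tanh.
    The output-layer error delta2 is carried backwards through the
    second-layer weights W2 and scaled by the local activation slope."""
    return (1.0 - np.tanh(z1) ** 2) * (W2 @ delta2)
```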
“…This is important for deep learning schemes. Note that [25] considers a single-layer perceptron, and the method proposed in that work cannot be used to propagate the error between layers. The implementation of the learning rule requires converting the signals x_i and δ_j at opposite electrodes of a memristor in the crossbar into a voltage drop across the crossbar that changes the memristor conductivity in proportion to the product x_i × δ_j.…”
Section: Discussion (mentioning; confidence: 99%)
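The x_i × δ_j product over a whole array is an outer-product weight update, which is what a crossbar computes in parallel. A minimal sketch under idealized assumptions (linear conductance response, illustrative learning rate eta):

```python
import numpy as np

def crossbar_update(G, x, delta, eta=0.01):
    """Idealized parallel update: the memristor at row i, column j
    changes its conductance in proportion to x[i] * delta[j], as if
    x were applied on the row electrodes and delta on the columns."""
    G += eta * np.outer(x, delta)
    return G

# Example on a 4x3 array of conductances.
G = np.full((4, 3), 0.5)
x = np.array([0.5, -0.2, 0.0, 1.0])   # pre-synaptic signals
delta = np.array([0.1, -0.3, 0.05])   # back-propagated errors
G = crossbar_update(G, x, delta)
```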
“…Furthermore, it will be much more difficult if the electronic synapse devices have nonlinear conductance responses. In other words, the use of multiple pulses for precise weight updates is impractical in actual electronic devices (Nair and Dudek, 2015). Therefore, we propose a weight-updating method based on a BP algorithm for HW-DNNs in which the amount of the weight change (equivalently, the learning rate in SW-DNNs) is determined by the amount of the conductance change of the electronic synapse devices.…”
Section: Weight-updating Methods (mentioning; confidence: 99%)
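One way to read "the weight change is determined by the device's conductance change" is a sign-based update: the hardware fixes the step size per pulse, so software supplies only the gradient's direction. A sketch of that interpretation, with a hypothetical per-pulse step delta_g:

```python
import numpy as np

def device_step_update(G, grad, delta_g=0.005):
    """Each synapse moves by the device's own per-pulse conductance
    step delta_g, so only the sign of the gradient is used and the
    effective learning rate is set by the hardware, not by software."""
    return G - delta_g * np.sign(grad)
```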
“…Recently, several types of emerging electronic synapse devices such as phase change memory (PCRAM) (Suri et al., 2011; Wright et al., 2011; Kuzum et al., 2012), resistive change memory (RRAM) (Jo et al., 2010; Ohno et al., 2011; Wu et al., 2012; Yu et al., 2012), ferroelectric devices (Chanthbouala et al., 2012), and FET-based devices (Diorio et al., 1996; Ziegler et al., 2012; Kim et al., 2016) have been proposed to mimic biological synapses. Although most of these works have focused on the spike-timing-dependent plasticity (STDP; Bi and Poo, 1998) learning algorithm (Kuzum et al., 2013), learning performance using STDP is still at an early stage (Burr et al., 2014; Nair and Dudek, 2015). Unlike the STDP approach, electronic synapse devices (Burr et al., 2014; Prezioso et al., 2015; Merrikh-Bayat et al., 2015) can also be applied to deep neural networks (DNNs) with well-studied back-propagation (BP) algorithms (Rumelhart et al., 1986).…”
Section: Introduction (mentioning; confidence: 99%)
“…BL is based on the Random Vector Functional-link Neural Network (RVFLNN) previously proposed in [32]. Instead of gradient-descent-based learning algorithms [33], RVFLNN achieves generalization as a function approximator by calculating a pseudoinverse to find the desired connection weights. However, RVFLNN does not work well in the modern large-data era.…”
Section: BL Algorithm (mentioning; confidence: 99%)
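A minimal sketch of the RVFLNN idea the quote contrasts with gradient descent: random, fixed hidden weights, direct input-to-output links, and a single closed-form pseudoinverse solve for the output weights (names and the tanh activation are illustrative assumptions, not from [32]):

```python
import numpy as np

def rvfl_output_weights(X, Y, n_hidden=50, seed=0):
    """Solve for RVFLNN output weights in closed form.
    X: (n_samples, n_features) inputs; Y: (n_samples, n_outputs) targets."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((X.shape[1], n_hidden))  # random, never trained
    H = np.tanh(X @ W_in)                               # enhancement nodes
    A = np.hstack([X, H])                               # direct links + hidden
    return np.linalg.pinv(A) @ Y                        # one pseudoinverse solve
```

The pseudoinverse replaces the iterative weight updates of backpropagation, which is fast for moderate data sizes but, as the quote notes, scales poorly to very large datasets.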