2016
DOI: 10.1038/ncomms13276

Random synaptic feedback weights support error backpropagation for deep learning

Abstract: The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but it complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. …
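To make the symmetry issue concrete, the sketch below contrasts the hidden-layer error signal under standard backpropagation with the one used under random feedback weights (feedback alignment). This is a minimal NumPy illustration, not code from the paper; the layer sizes, variable names, and squared-error loss are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network x -> h -> y (sizes are illustrative).
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B2 = rng.normal(0, 0.1, (n_hid, n_out))  # fixed random feedback weights

x = rng.normal(size=n_in)
target = rng.normal(size=n_out)

h = np.tanh(W1 @ x)          # hidden activity
y = W2 @ h                   # linear readout
e = y - target               # output error under squared-error loss

# Backpropagation carries the error backward through the transpose of
# the forward weights -- the precise, symmetric connectivity pattern.
delta_bp = (W2.T @ e) * (1 - h**2)

# Feedback alignment carries the same error through a fixed random
# matrix instead; no symmetry with W2 is required.
delta_fa = (B2 @ e) * (1 - h**2)

# Either delta drives the usual local update of the first layer:
W1 -= 0.01 * np.outer(delta_fa, x)
```

The paper's striking result is that learning with delta_fa still works: during training, the forward weights come to align with the fixed feedback weights, so the random projection of the error becomes a useful teaching signal.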

Cited by 588 publications (736 citation statements). References 52 publications.
“…Some of the criticisms include the use in backpropagation of symmetrical weights for the forward-inference and backward error-propagation phases, the relative paucity of supervised signals, and the clear and strong unsupervised basis of much learning. Recent research has shown that the symmetrical weight requirement is not a strict requirement (Lillicrap, Cownden, Tweed, & Akerman, 2016). Roelfsema and van Ooyen (2005) already showed that activation feedback combined with a broadly distributed, dopamine-like error-difference signal can on average learn error backpropagation in a reinforcement-learning setting.…”
Section: 1 (mentioning, confidence: 99%)
“…Although previous work (Lee et al, 2016; Lillicrap et al, 2016; O'Connor and Welling, 2016) overcomes some of the fundamental difficulties of gradient BP listed above in spiking networks, here we tackle all of the key difficulties using event-driven random BP (eRBP), a synaptic plasticity rule for deep spiking neural networks achieving classification accuracies that are similar to those obtained in artificial neural networks, potentially running on a fraction of the energy budget with dedicated neuromorphic hardware.…”
Section: Introduction (mentioning, confidence: 99%)
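For intuition, here is a simplified, rate-based sketch of the kind of local three-factor update eRBP builds on (an illustrative approximation with assumed names and shapes, not the event-driven rule from the cited paper): a fixed random projection of the output error, gated by the postsynaptic neuron's sensitivity, multiplies a presynaptic activity trace.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes; B is a fixed random feedback matrix, as in
# random-backpropagation schemes.
n_pre, n_post, n_out = 10, 5, 3
W = rng.normal(0, 0.1, (n_post, n_pre))
B = rng.normal(0, 0.1, (n_post, n_out))
lr = 1e-2

def erbp_like_step(W, pre_trace, error, sensitive):
    """One simplified update: (random feedback error) x (gate) x
    (presynaptic trace). In eRBP proper, `pre_trace` is a low-pass
    filter of presynaptic spikes and `sensitive` is a boxcar function
    of the membrane potential; both are kept abstract here."""
    post_err = (B @ error) * sensitive       # random projection of error
    return W - lr * np.outer(post_err, pre_trace)

# One illustrative step with stand-in signals.
pre_trace = rng.random(n_pre)                    # filtered presynaptic spikes
error = rng.normal(size=n_out)                   # prediction minus target
sensitive = (rng.random(n_post) > 0.5).astype(float)
W = erbp_like_step(W, pre_trace, error, sensitive)
```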
“…Three are highlighted here; see Section 3.5, and especially Sections 3.6 and 3.7. Other algorithms where the Response graphs do not simply implement backprop include difference target propagation (Lee et al, 2015) and feedback alignment (Lillicrap et al, 2014) [both discussed briefly in Section 3.7], and truncated backpropagation through time (Elman, 1990; Williams and Peng, 1990; Williams and Zipser, 1995), where a choice is made about where to cut backprop short. Examples where the query and response graphs differ are of particular interest, since they point toward more general classes of deep learning algorithms.…”
Section: Grammars for Games (mentioning, confidence: 99%)
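Since truncated backpropagation through time is named here only in passing, a minimal sketch may help (a toy RNN with assumed names and sizes, not code from any of the cited papers): the backward pass is simply cut off after k steps rather than unrolled over the whole sequence.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy RNN h_t = tanh(W h_{t-1} + U x_t); sizes are illustrative.
n_h, n_x, T, k = 4, 3, 20, 5          # k = truncation length
W = rng.normal(0, 0.3, (n_h, n_h))
U = rng.normal(0, 0.3, (n_h, n_x))
xs = rng.normal(size=(T, n_x))

# Forward pass, storing states for the backward pass.
hs = [np.zeros(n_h)]
for t in range(T):
    hs.append(np.tanh(W @ hs[t] + U @ xs[t]))

# Truncated BPTT: propagate dL/dh_T backward through at most k steps,
# "cutting backprop short" instead of unrolling all T steps.
grad_h = np.ones(n_h)                 # stand-in for dL/dh_T
dW = np.zeros_like(W)
for t in range(T, T - k, -1):
    dpre = grad_h * (1 - hs[t] ** 2)  # back through tanh
    dW += np.outer(dpre, hs[t - 1])   # local contribution to dL/dW
    grad_h = W.T @ dpre               # carry the gradient one step back
```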
“…Two recent alternatives to backprop that also do not rely on backpropagating exact gradients are target propagation (Lee et al, 2015) and feedback alignment (Lillicrap et al, 2014). Target propagation makes do without gradients by implementing autoencoders at each layer.…”
Section: R4 (Biological Plausibility of Kickback) (mentioning, confidence: 99%)
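As a rough illustration of the autoencoder idea mentioned in this excerpt, here is a sketch of plain target propagation (names and sizes are assumptions, and the difference-correction term of Lee et al.'s difference target propagation is omitted): layer-local targets, not gradients, are passed backward through a learned approximate inverse of each layer.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-layer network; V2 parameterizes g2, a learned approximate
# inverse of layer 2 (the decoder of the per-layer "autoencoder").
n0, n1, n2 = 6, 5, 4
W1 = rng.normal(0, 0.3, (n1, n0))
W2 = rng.normal(0, 0.3, (n2, n1))
V2 = rng.normal(0, 0.3, (n1, n2))

f1 = lambda v: np.tanh(W1 @ v)
f2 = lambda v: np.tanh(W2 @ v)
g2 = lambda v: np.tanh(V2 @ v)   # trained so that g2(f2(h)) ~ h

x = rng.normal(size=n0)
y_target = rng.normal(size=n2)

h1 = f1(x)                       # forward pass
h2 = f2(h1)

# Backward pass: propagate targets, not gradients. The output target
# nudges h2 toward y_target; the hidden target comes from mapping the
# output target back through the learned inverse g2.
t2 = h2 - 0.1 * (h2 - y_target)
t1 = g2(t2)

# Each layer then takes a local step toward its own target (the
# autoencoder training of V2 itself is omitted for brevity).
W2 -= 0.01 * np.outer((h2 - t2) * (1 - h2**2), h1)
W1 -= 0.01 * np.outer((h1 - t1) * (1 - h1**2), x)
```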