2020
DOI: 10.3389/fnins.2020.00424
Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE)

Abstract: A growing body of work underlines striking similarities between biological neural networks and recurrent, binary neural networks. A relatively smaller body of work, however, addresses the similarities between learning dynamics employed in deep artificial neural networks and synaptic plasticity in spiking neural networks. The challenge preventing this is largely caused by the discrepancy between the dynamical properties of synaptic plasticity and the requirements for gradient backpropagation. Learning algorithm…


Cited by 204 publications (214 citation statements)
References 32 publications
“…Several methods for approximating stochastic gradient descent in feedforward networks of spiking neurons have been proposed, see, e.g., refs. 40-44. These employ, like e-prop, a pseudo-gradient to overcome the non-differentiability of a spiking neuron, as proposed previously in refs.…”
Section: Discussion (mentioning)
confidence: 99%
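The pseudo-gradient (often called a surrogate gradient) mentioned in this statement can be illustrated with a short sketch: the forward pass keeps the non-differentiable spike threshold, while the backward pass substitutes a smooth pseudo-derivative. The following is a minimal, hypothetical PyTorch example, not code from the cited papers; the fast-sigmoid shape and the scale constant are assumed for illustration.

```python
import torch

class SurrGradSpike(torch.autograd.Function):
    """Hard spike threshold in the forward pass, smooth pseudo-derivative in the backward pass."""

    scale = 10.0  # assumed steepness of the pseudo-derivative

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        # Forward: non-differentiable Heaviside step (spike if the potential crosses 0)
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Backward: replace the ill-defined derivative of the step function
        # with the derivative of a fast sigmoid centered on the threshold
        surrogate = 1.0 / (SurrGradSpike.scale * membrane_potential.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrGradSpike.apply  # usable inside any spiking-layer forward pass
```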
“…45, 46. References 40, 42, 43 arrive at a synaptic plasticity rule for feedforward networks that consists, like e-prop, of the product of a learning signal and a derivative (eligibility trace) that describes the dependence of a spike of a neuron j on the weight of an afferent synapse W_ji. But in a recurrent network, the spike output of j depends on W_ji also indirectly, via loops in the network through which a spike of neuron j contributes to the firing of other neurons, which in turn affect the firing of the presynaptic neuron i.…”
Section: Discussion (mentioning)
confidence: 99%
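The factorization described in this statement can be written compactly. The notation below (z_j for the spike output of neuron j, L_j for the top-down learning signal, e_ji for the eligibility trace) is assumed for illustration and is not quoted from the cited papers:

```latex
\Delta W_{ji} \propto \sum_t L_j(t)\, e_{ji}(t),
\qquad
e_{ji}(t) \approx \left.\frac{\partial z_j(t)}{\partial W_{ji}}\right|_{\text{local}}
```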
“…Although ignoring H_E seems like a drastic simplification, several studies have used it to construct biologically plausible online learning rules as local approximations of RTRL. Empirically, these rules perform well on a number of complex problems, either without recurrent connections, as in the case of SuperSpike [17], or by ignoring gradient flow through the recurrent synaptic connectivity, as done in e-Prop [27], RFLO [92], and DECOLLE [19]. These findings suggest that explicit recurrence is either not necessary for many problems or that ignoring explicit recurrence in gradient computations does not create a major impediment to successful learning, even when such recurrent connections are present.…”
Section: Implicit Recurrence Induces Approximate Local and Effic… (mentioning)
confidence: 99%
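The approximation discussed in this statement can be sketched against the full real-time recurrent learning (RTRL) recursion. The notation (h_t for the network state, D_t for each neuron's own implicit dynamics such as the membrane leak, H_E for the Jacobian through the explicit recurrent connectivity) is assumed for illustration:

```latex
\frac{d h_t}{d W}
  = \frac{\partial h_t}{\partial h_{t-1}}\,\frac{d h_{t-1}}{d W}
  + \frac{\partial h_t}{\partial W},
\qquad
\frac{\partial h_t}{\partial h_{t-1}} = D_t + H_E .
```

Setting H_E to zero keeps only the within-neuron recurrence, so the remaining recursion factorizes into per-synapse quantities that can be updated online and locally, which is the simplification the cited rules exploit.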
“…Thus, an important challenge in bridging neuroscience and machine learning is to understand how plasticity processes can utilize this evaluative feedback efficiently for learning. Interestingly, an increasing body of work demonstrates that approximate forms of gradient backpropagation compatible with biological neural networks naturally incorporate such feedback, and models trained with them achieve near state-of-the-art results on classical classification benchmarks 89-91. Synaptic plasticity rules derived from gradient descent lead to 'three-factor' rules, consistent with error-modulated Hebbian learning (Fig.…”
Section: Artificial Connectionist RL Agents (mentioning)
confidence: 99%
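A generic three-factor form consistent with this description, with illustrative symbols (M(t) for the global error or modulatory signal, \bar{x}_i and \bar{z}_j for pre- and postsynaptic activity traces), would be:

```latex
\Delta W_{ji} \propto M(t)\,\bar{x}_i(t)\,\bar{z}_j(t)
```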