2019
DOI: 10.48550/arxiv.1905.13633
Preprint

Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input

Abstract: Equilibrium Propagation (EP) is a biologically inspired learning algorithm for convergent recurrent neural networks, i.e. RNNs that are fed by a static input x and settle to a steady state. Training convergent RNNs consists in adjusting the weights until the steady state of the output neurons coincides with a target y. Convergent RNNs can also be trained with the more conventional Backpropagation Through Time (BPTT) algorithm. In its original formulation, EP was described in the case of real-time neuronal dynamics, …
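The setting the abstract describes (a network driven by a static input x that relaxes to a steady state) can be made concrete with a minimal sketch of the free-phase dynamics. Everything below — the layer sizes, tanh nonlinearity, and step size — is an illustrative assumption, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and weights -- assumptions, not taken from the paper.
n_in, n_hid = 5, 8
A = rng.normal(scale=0.1, size=(n_hid, n_in))    # input -> state weights
W = rng.normal(scale=0.1, size=(n_hid, n_hid))
W = 0.5 * (W + W.T)                               # symmetric recurrent weights
np.fill_diagonal(W, 0.0)

def relax(x, s, n_steps=1000, eps=0.1, tol=1e-6):
    """Iterate s <- s + eps * (tanh(W s + A x) - s) under a *static*
    input x until the state stops moving, i.e. reaches a steady state."""
    for _ in range(n_steps):
        s_next = s + eps * (np.tanh(W @ s + A @ x) - s)
        if np.max(np.abs(s_next - s)) < tol:
            return s_next
        s = s_next
    return s

x = rng.normal(size=n_in)             # static input, held fixed
s_star = relax(x, np.zeros(n_hid))    # the steady state the abstract refers to
```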

Cited by 6 publications (17 citation statements) | References 10 publications
“…The convolutional setting was also discussed in [25]. The primitive function they develop, an analog of the energy function (equation 18 in [25]), is, however, just a quadratic form. Dynamical equations defined as derivatives of that function are linear.…”
Section: HAM With Two Hidden Layers and Local Connectivity (mentioning)
confidence: 99%
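The step from “quadratic primitive” to “linear dynamics” is worth spelling out. With a generic quadratic form standing in for the primitive function of [25] (the symbols F, W, b, s here are illustrative, and W is taken symmetric):

```latex
F(s) = \tfrac{1}{2}\, s^\top W s + b^\top s
\quad\Longrightarrow\quad
\frac{\mathrm{d}s}{\mathrm{d}t} = \frac{\partial F}{\partial s} = W s + b ,
```

so the resulting dynamics are affine (linear) in the state s.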
“…Such architectures come with their own technical difficulties. Because they rely on co-located memory and processing resources, standard optimization algorithms can prove impractical, although novel optimization procedures tailored to this distinctive hardware design have recently been put forward to overcome this obstacle [77][78][79][80][81].…”
Section: Introduction (mentioning)
confidence: 99%
“…First, the network relaxes to a steady state; then the output layer is nudged towards a ground-truth target until a second steady state is reached. During the second phase, the perturbation at the output propagates to upstream layers, creating local error signals that exactly match those computed by Backpropagation Through Time (BPTT) [Ernoult et al., 2019]. The spatial locality of the learning rule prescribed by EP is highly attractive for designing energy-efficient “neuromorphic” hardware implementations of gradient-based learning algorithms.…”
Section: Introduction (mentioning)
confidence: 99%
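A rough sketch of the two phases this statement describes, under toy assumptions (a small all-to-all network with symmetric weights, tanh dynamics, a squared-error cost, and a Hopfield-style contrastive weight update; none of these specifics are taken from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 4, 3
A = rng.normal(scale=0.1, size=(n_out, n_in))     # input weights
W = rng.normal(scale=0.1, size=(n_out, n_out))
W = 0.5 * (W + W.T); np.fill_diagonal(W, 0.0)     # symmetric recurrent weights

def relax(x, s, y=None, beta=0.0, n_steps=2000, eps=0.05):
    """Run the dynamics to a steady state; with beta > 0 the state is
    additionally nudged toward the target y (EP's second phase)."""
    for _ in range(n_steps):
        ds = np.tanh(W @ s + A @ x) - s
        if beta > 0.0:
            ds += beta * (y - s)                  # nudging force from the cost
        s = s + eps * ds
    return s

x, y = rng.normal(size=n_in), rng.normal(size=n_out)
beta = 0.1                                        # small nudging strength

s_free  = relax(x, np.zeros(n_out))               # phase 1: free steady state
s_nudge = relax(x, s_free, y=y, beta=beta)        # phase 2: nudged steady state

# Contrastive, spatially local update: compare correlations at the two
# steady states. In the beta -> 0 limit this estimates the loss gradient
# (the quantity shown in [Ernoult et al., 2019] to match BPTT).
lr = 0.01
W += lr * (np.outer(s_nudge, s_nudge) - np.outer(s_free, s_free)) / beta
np.fill_diagonal(W, 0.0)                          # keep no self-connections
```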
“…However, previous works on EP [Scellier and Bengio, 2017; O'Connor et al., 2018; O'Connor et al., 2019; Ernoult et al., 2019] limited their experiments to the MNIST classification task and to shallow network architectures. Despite the theoretical guarantees of EP, the literature suggests that no implementation of EP has thus far succeeded in matching the performance of standard deep learning approaches to training deep networks on challenging visual tasks.…”
Section: Introduction (mentioning)
confidence: 99%