2017
DOI: 10.3389/fncom.2017.00024
Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation

Abstract: We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation…
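The two-phase procedure in the abstract can be illustrated with a deliberately tiny toy model (not taken from the paper; the energy, cost, and all numbers below are illustrative): a single free unit y driven by a fixed input x through one weight w. Phase 1 relaxes the energy E to a fixed point; phase 2 relaxes E plus a weakly weighted cost beta*C; the difference of dE/dw between the two equilibria, divided by beta, approximates the true gradient of the cost.

```python
# Toy Equilibrium Propagation on a one-weight "network".
# Energy:          E(y) = 0.5*y**2 - w*x*y     (free fixed point: y* = w*x)
# Cost:            C(y) = 0.5*(y - t)**2
# Phase-2 total:   F(y) = E(y) + beta*C(y)     (target weakly clamped)

def relax(w, x, t, beta, steps=200, step_size=0.1):
    """Gradient descent on F(y) until (near) equilibrium."""
    y = 0.0
    for _ in range(steps):
        dF = (y - w * x) + beta * (y - t)  # dF/dy
        y -= step_size * dF
    return y

x, t, w, beta = 1.0, 0.5, 0.2, 0.01

y_free = relax(w, x, t, beta=0.0)    # phase 1: prediction
y_weak = relax(w, x, t, beta=beta)   # phase 2: error implicitly propagated

# dE/dw = -x*y, so the EqProp gradient estimate is the phase difference / beta
g_eqprop = ((-x * y_weak) - (-x * y_free)) / beta
g_exact = (w * x - t) * x            # true dC/dw at the free fixed point
```

As beta shrinks, g_eqprop converges to g_exact; with beta = 0.01 the two already agree to within about one percent, which is the sense in which the second phase "computes the gradient of an objective function just like Backpropagation" without a separate error circuit.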

Cited by 325 publications (485 citation statements) | References 27 publications
“…Roelfsema and van Ooyen (2005) already showed that an activation feedback combined with a broadly distributed, dopamine-like error-difference signal can on average learn error-backpropagation in a reinforcement learning setting. Alternative learning schemes, like Equilibrium Propagation (Scellier & Bengio, 2017), have also been shown to approximate error-backpropagation while effectively implementing basic STDP rules.…”
Section: 1 (mentioning)
confidence: 99%
“…DCNNs, however, are pitched at a high level of abstraction from perceptual cortex, which could lead one to doubt that they succeed at providing mechanistic explanations for even perceptual processing (though see Boone & Piccinini, 2016; Stinson, 2018). In particular, some key aspects of Hubel and Wiesel's story which originally inspired DCNNs have been challenged on neuroanatomical grounds (especially regarding the purported dichotomy between simple and complex cells, on which the cooperation between convolution and pooling was based; Priebe, Mechler, Carandini, & Ferster, 2004), and there remains significant debate as to whether backpropagation learning is biologically plausible (though more biologically plausible learning algorithms have recently been explored by prominent deep learning modelers, especially involving randomized error signals and spike-timing-dependent plasticity; Lillicrap, Cownden, Tweed, & Akerman, 2016; Scellier & Bengio, 2017).…”
Section: What Kind of Explanation Do DCNNs Provide? (mentioning)
confidence: 99%
“…This latching, along with contrastive Hebbian learning (Ackley et al., 1987; Anderson and Peterson, 1987; Hinton and McClelland, 1988; O'Reilly, 1996; Xie and Seung, 2003), or with another contrastive learning rule such as Scellier and Bengio (2017) or Guerguiev et al. (2017), or more generally with any learning rule that relies on clamping of neurons at the output layer of a network to provide a supervision signal which is propagated back to earlier layers, would cause the cortical DNN feeding this prospective face patch to learn to associate the various views of the attended face with that particular latched activation pattern. This method of learning view-invariance is really just an extension of the 'trace rule' (Földiák, 1991; Wallis et al., 1993), but one that takes advantage of computations and behavioral knowledge available elsewhere in the brain.…”
Section: Roles for Supervised Training in the Brain (mentioning)
confidence: 99%
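The "clamped minus free" idea invoked in the excerpt above can be sketched in its simplest possible setting (not taken from any of the cited works; the linear one-weight network and the data are illustrative). With y = w*x, settling is trivial, and the contrastive Hebbian update dw = lr*(x*y_clamped - x*y_free) reduces exactly to the delta rule dw = lr*x*(t - w*x):

```python
# Minimal contrastive Hebbian learning on a linear one-weight network.
w = 0.0
lr = 0.1
data = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]  # targets generated by t = 2*x

for _ in range(100):
    for x, t in data:
        y_free = w * x       # minus phase: output settles freely
        y_clamped = t        # plus phase: output clamped to the target
        w += lr * (x * y_clamped - x * y_free)  # Hebbian difference
```

The weight converges to w = 2, the generating rule, which shows in miniature how clamping the output layer turns a purely Hebbian correlation difference into an error-correcting supervision signal.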
“…The suggestion that the basal ganglia orchestrates training of cortical networks derives from Ashby et al. (2010) and was revisited in Pyle and Rosenbaum (2018), and the notion that hippocampal information is consolidated into cortex for permanent storage is prevalent in works such as Atallah et al. (2004), Kumaran et al. (2016), and McClelland et al. (1995). Finally, the idea of contrastive training of neural networks, and of providing supervision signals to a network by clamping or perturbing an output layer, has been important in the machine learning and computational cognitive science fields for some time (e.g., O'Reilly, 1996; Xie and Seung, 2003) and continues to be today (e.g., Guerguiev et al., 2017; Scellier and Bengio, 2017). Our overall architecture linking deep hierarchies, symbolic intermediates supporting discrete operations, and reinforcement learning is reminiscent of Garnelo et al. (2016).…”
Section: Relations with Other Proposals (mentioning)
confidence: 99%