2022
DOI: 10.1101/2022.03.17.484712
Preprint

The combination of Hebbian and predictive plasticity learns invariant object representations in deep sensory networks

Abstract: Discriminating distinct objects and concepts from sensory stimuli is essential for survival. Our brains accomplish this feat by forming meaningful internal representations in deep sensory networks with plastic synaptic connections. Experience-dependent plasticity presumably exploits temporal contingencies between sensory inputs to build these internal representations. However, the precise mechanisms underlying plasticity remain elusive. We derive a local synaptic plasticity model inspired by self-supervised ma…

Cited by 16 publications (25 citation statements)
References 103 publications (292 reference statements)

“…From this optimization problem we derived a learning rule which gave rise to experimentally observed STDP mechanisms. Our results, together with previous studies [69, 70, 71], suggest that STDP is a consequence of a general learning rule given the particular state of the system, the stimulation protocol and the specific properties of the input. As a consequence, several STDP learning windows which are described by other phenomenological rules are predicted by our model, as well as the dependence on synaptic strength and depolarization level.…”
Section: Discussion (supporting)
confidence: 82%
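To illustrate the kind of phenomenological STDP window that such derived learning rules are said to reproduce, here is a minimal sketch of the classic pair-based exponential window. The amplitudes and time constants are illustrative placeholders, not values from the cited work.

```python
import numpy as np

def stdp_window(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a pre/post spike-time difference.

    delta_t = t_post - t_pre (ms). Positive delta_t (pre before post) gives
    potentiation, negative gives depression. Parameter values are
    illustrative assumptions only.
    """
    delta_t = np.asarray(delta_t, dtype=float)
    ltp = a_plus * np.exp(-delta_t / tau_plus)    # pre-before-post branch
    ltd = -a_minus * np.exp(delta_t / tau_minus)  # post-before-pre branch
    return np.where(delta_t >= 0, ltp, ltd)

# Example: evaluate the window over a range of spike-time differences.
dts = np.linspace(-80, 80, 161)
dw = stdp_window(dts)
```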
“…Networks trained on the static frames of the sequences, in which activity was reset after each frame, also lacked a block-diagonal structure (Fig. S2), illustrating the role of continuous motion in the training paradigm, which is to provide the necessary temporal structure in which subsequent inputs can be assumed to be caused by the same objects. The Hebbian learning rule thus groups together consecutive inputs in a manner reminiscent of contrastive, self-supervised methods (34,35,58) that explicitly penalize dissimilarity in the loss function. Here, the higher-level representation from the previous timestep provides a target for the consecutive inputs, reminiscent of implementations of supervised learning with local learning rules (59–61).…”
Section: Results (mentioning)
confidence: 99%
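A minimal sketch of the idea described in the statement above: a single linear layer in which the representation of the previous frame serves as a local target for the current frame, so that consecutive inputs are pulled toward the same representation. The layer sizes, learning rate, and random "frames" are hypothetical illustrations, not the cited model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 64, 16
W = rng.normal(scale=0.1, size=(n_out, n_in))   # feedforward weights
eta = 1e-3                                       # learning rate (illustrative)

def normalize(v):
    return v / (np.linalg.norm(v) + 1e-8)

# Hypothetical sequence of frames assumed to be caused by the same object.
frames = [rng.normal(size=n_in) for _ in range(10)]

prev_y = normalize(W @ frames[0])
for x in frames[1:]:
    y = normalize(W @ x)
    # Local, Hebbian-like update: pull the current representation toward
    # the previous frame's representation, which acts as the target.
    err = prev_y - y
    W += eta * np.outer(err, x)
    prev_y = y
```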
“…At the same time, the model acquired generative capacity that enables reconstruction of partially occluded stimuli, in line with retinotopic and content-carrying feedback connections to V1 ((13,36), see also (3) for a review of predictive feedback mechanisms). Other neuron-level models of invariance learning (28,29,31,34) neither account for such feedback nor for the experimentally observed explicit encoding of mismatch between prediction and observation (10,35), and used considerably more complex learning rules requiring a larger set of assumptions (34). By solving the task of invariance learning in agreement with the generativity of sensory cortical systems, the claim for predictive coding circuits as fundamental building blocks of the brain's perceptual pathways is strengthened. We argue that the model generalizes predictive coding to moving stimuli in a biologically more plausible way than other approaches (16,42,46) that rely on non-local error backpropagation (41) or backpropagation through time (46).…”
Section: A Generative Model To Learn Invariant Representations (mentioning)
confidence: 99%
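A minimal sketch of the predictive coding mechanism this passage refers to: a latent estimate is refined from the prediction error, and the generative (feedback) pathway reconstructs the input, including an occluded region. The network size, step size, and occlusion mask are hypothetical illustrations, not the cited architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_latent = 100, 10
G = rng.normal(scale=0.1, size=(n_pix, n_latent))  # generative (feedback) weights

x = rng.normal(size=n_pix)             # a stimulus (illustrative random pattern)
mask = np.ones(n_pix); mask[:30] = 0   # hypothetical occlusion of part of the input

z = np.zeros(n_latent)
for _ in range(50):
    x_hat = G @ z                      # top-down prediction of the input
    err = mask * (x - x_hat)           # prediction error, only where input is visible
    z += 0.1 * (G.T @ err)             # refine latent estimate from the error

reconstruction = G @ z                 # feedback fills in the occluded region
```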
“…Several training rules for artificial neural networks (ANNs) have been proposed to break the weight symmetry constraint required by BP (Lillicrap et al., 2016b; Nøkland, 2016; Liao et al., 2016; Nøkland & Eidnes, 2019; Frenkel et al., 2021; Hazan et al., 2018; Kohan et al., 2018; Clark et al., 2021; Meulemans et al., 2021; Halvagal & Zenke, 2022; Journé et al., 2022). The Feedback Alignment (FA) algorithm (Lillicrap et al., 2016b) replaces the transposed forward weights W in the feedback path with random, fixed (non-learning) weight matrices F, thereby solving the weight transport problem (Fig.…”
Section: Feedback Alignment and Weight Mirroring (mentioning)
confidence: 99%
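A minimal sketch of the Feedback Alignment idea described in this passage, for a two-layer network: the backward pass propagates the output error through a fixed random matrix F instead of the transposed forward weights. The layer sizes, nonlinearity, and squared-error loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 20, 30, 5
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))
F = rng.normal(scale=0.1, size=(n_hid, n_out))   # fixed random feedback weights (replaces W2.T)
eta = 1e-2

x = rng.normal(size=n_in)
target = rng.normal(size=n_out)

# Forward pass with a tanh hidden layer and a linear output.
h = np.tanh(W1 @ x)
y = W2 @ h

# Backward pass: the output error is fed back through F rather than W2.T.
e_out = y - target                    # gradient of squared error w.r.t. y
e_hid = (F @ e_out) * (1 - h**2)      # feedback-alignment error signal at the hidden layer

W2 -= eta * np.outer(e_out, h)
W1 -= eta * np.outer(e_hid, x)
```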