Meta-Learning through Hebbian Plasticity in Random Networks
2020 | Preprint
DOI: 10.48550/arxiv.2007.02686

Cited by 9 publications (19 citation statements: 1 supporting, 18 mentioning, 0 contrasting)
References: 0 publications

“…Synaptic plasticity is a powerful mechanism for unsupervised learning in neural networks, inspired by learning processes in the biological brain [1,2,3,4,5]. This process has been incorporated into spiking and artificial neural networks to enable intra-lifetime learning [8,9,10,11,12,13]. However, in this work it was shown that plastic ANNs struggle to generalize their behavior beyond the training time horizon.…”
Section: Discussion (mentioning)
confidence: 96%
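For readers skimming these statements, here is a minimal sketch of the kind of intra-lifetime Hebbian update being discussed: weights between co-active units are strengthened during the agent's lifetime, with no gradient-based training. The outer-product rule, the learning rate eta, and the crude normalization below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def hebbian_step(W, pre, post, eta=0.01):
    """One intra-lifetime Hebbian update: strengthen weights between
    co-active pre- and post-synaptic units (outer-product rule)."""
    W += eta * np.outer(post, pre)      # Delta w_ij = eta * post_i * pre_j
    W /= np.linalg.norm(W) + 1e-8       # crude normalization to keep weights bounded
    return W

# usage: a single forward pass followed by a plastic update
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)) * 0.1   # random initial weights
x = rng.standard_normal(8)              # pre-synaptic activity
y = np.tanh(W @ x)                      # post-synaptic activity
W = hebbian_step(W, x, y)
```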
“…In the field of Artificial Intelligence (AI), the primary focus of research with ANNs has been on discovering static solutions, where the synaptic weights remain constant throughout the lifetime of the organism. Inspired by the biological brain, a rich history of work has demonstrated the design of ANNs with synaptic plasticity [8,9,10,11,12,13], referred to as Plastic Artificial Neural Networks (PANNs). These networks have shown impressive capabilities, including the ability to generalize to novel environmental circumstances, recover from limb damage, and enhance memory [11,14,5].…”
Section: Introduction (mentioning)
confidence: 99%
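The paper indexed here belongs to this PANN line of work: rather than evolving the weights themselves, it meta-optimizes per-synapse plasticity coefficients of a generalized "ABCD" Hebbian rule that then shapes a randomly initialized network during its lifetime. Below is a sketch of such an update; the array shapes, names, and numpy usage are assumptions for illustration, not the authors' code.

```python
import numpy as np

def abcd_step(W, pre, post, A, B, C, D, eta):
    """Generalized Hebbian (ABCD) update, one per-synapse step:
    dW_ij = eta_ij * (A_ij * post_i * pre_j + B_ij * pre_j + C_ij * post_i + D_ij).
    The coefficients (A, B, C, D, eta), not the weights, are what meta-learning optimizes."""
    corr = np.outer(post, pre)          # Hebbian correlation term post_i * pre_j
    dW = eta * (A * corr + B * pre[None, :] + C * post[:, None] + D)
    return W + dW

# usage: random initial weights, plasticity coefficients drawn at random for brevity
rng = np.random.default_rng(0)
n_post, n_pre = 4, 8
W = rng.standard_normal((n_post, n_pre)) * 0.1
A, B, C, D = (rng.standard_normal((n_post, n_pre)) * 0.1 for _ in range(4))
eta = np.full((n_post, n_pre), 0.01)
x = rng.standard_normal(n_pre)
y = np.tanh(W @ x)
W = abcd_step(W, x, y, A, B, C, D, eta)
```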
“…Modern deep learning systems are generally unable to adapt to a sudden reordering of sensory inputs, unless the model is retrained, or if the user manually corrects the ordering of the inputs for the model. However, techniques from continual meta-learning, such as adaptive weights [2,35,64], Hebbian-learning [51,52,56], and model-based [1,19,36,37] approaches can help the model adapt to such changes, and remain a promising active area of research.…”
Section: Introduction (mentioning)
confidence: 99%
“…procedures operating over local data. As an example of the latter, it is possible to meta-learn local iterative rules [9], [10], initializations [11]–[13], or learning rates [14].…”
Section: Introduction (mentioning)
confidence: 99%
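As a toy illustration of one of the options this statement lists, meta-learning an initialization (cf. [11]–[13]), here is a minimal MAML-style sketch on a quadratic task family. The task distribution, learning rates, and step counts are illustrative assumptions.

```python
import numpy as np

# Meta-learn an initialization theta such that ONE inner gradient step
# adapts well to each sampled task. Toy task family:
# f_c(theta) = 0.5 * ||theta - c||^2, with task-specific optimum c.
rng = np.random.default_rng(0)
theta = rng.standard_normal(2)          # the meta-learned initialization
inner_lr, outer_lr = 0.4, 0.1

for step in range(500):
    c = rng.standard_normal(2) + 3.0    # sample a task (its optimum)
    grad_inner = theta - c              # gradient of f_c at theta
    theta_adapted = theta - inner_lr * grad_inner   # one inner adaptation step
    # Outer gradient of f_c(theta_adapted) w.r.t. theta; for this quadratic,
    # f_c(theta_adapted) = 0.5 * (1 - inner_lr)^2 * ||theta - c||^2, so:
    grad_outer = (1 - inner_lr) ** 2 * (theta - c)
    theta -= outer_lr * grad_outer      # meta-update the initialization

print("meta-learned init:", theta)      # converges near the mean task optimum (3, 3)
```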