2020
DOI: 10.1101/2020.06.17.156513
Preprint

One-shot learning with spiking neural networks

Abstract: Understanding how one-shot learning can be accomplished through synaptic plasticity in neural networks of the brain is a major open problem. We propose that approximations to BPTT in recurrent networks of spiking neurons (RSNNs) such as e-prop cannot achieve this because their local synaptic plasticity is gated by learning signals that are rather ad hoc from a biological perspective: Random projections of instantaneously arising losses at the network outputs, analogously as in Broadcast Alignment for feedforwa…

Cited by 15 publications (24 citation statements)
References 25 publications
“…Option 3 is used by the MAML approach of [336], where only very few updates of synaptic weights via BPTT are required in the inner loop of L2L. It also occurs in [325] in conjunction with option 4, see figure 24 for an illustration.…”
Section: Current and Future Challenges
confidence: 99%
“…In this direction, short-term synaptic plasticity [8], [9] and neuronal adaptation [7] have been exploited. One-shot learning of SNNs has been studied in [27]. Instead of Hebbian plasticity, this model relied on a more elaborate three-factor learning rule.…”
Section: Discussion
confidence: 99%
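The three-factor learning rule mentioned above can be sketched as follows. This is an illustrative reading, not the exact rule from [27]: the weight update combines (1) a presynaptic contribution and (2) postsynaptic spiking, accumulated into a synapse-local eligibility trace, gated by (3) a top-down learning signal; all variable names and constants here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 5, 3
w = rng.normal(0.0, 0.1, (n_post, n_pre))

eligibility = np.zeros((n_post, n_pre))  # slow, synapse-local trace
decay = 0.9                              # trace decay per time step
lr = 0.01                                # learning rate

for t in range(100):
    # factors 1 & 2: pre/post spike coincidences feed the eligibility trace
    pre_spikes = (rng.random(n_pre) < 0.2).astype(float)
    post_spikes = (rng.random(n_post) < 0.2).astype(float)
    eligibility = decay * eligibility + np.outer(post_spikes, pre_spikes)
    # factor 3: a per-neuron learning signal gates the actual weight change,
    # unlike pure Hebbian plasticity, which would use the trace alone
    learning_signal = rng.normal(0.0, 1.0, n_post)
    w += lr * learning_signal[:, None] * eligibility
```

The key structural point is that the eligibility trace is computed locally at each synapse, while the learning signal arrives as a separate, neuron-wide third factor.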
“…While standard deep learning approaches need large numbers of training examples during training, humans can learn new concepts based on a single exposure (one-shot learning) [24]. A large number of few-shot learning approaches have been proposed using artificial neural networks [25], [26], but biologically plausible SNN models are very rare [27]. We wondered whether Hebbian plasticity could endow SNNs with one-shot learning capabilities.…”
Section: One-shot Learning
confidence: 99%
“…We first consider the family of Omniglot 2-way classification tasks [36], which is often used to test the within-task generalization capabilities of meta-learners [6], [7], [11], [29]. We train a fully connected GLM-based SNN with 4 hidden neurons and 2 visible neurons with additional lateral connections between the hidden neurons and feedback connections from the visible neurons to the hidden neurons.…”
Section: Methods
confidence: 99%
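The connectivity described in that methods excerpt can be sketched as an adjacency mask. This is our reading of the quoted description, not the authors' code: 4 hidden and 2 visible neurons, with feedforward hidden-to-visible, lateral hidden-to-hidden, and feedback visible-to-hidden connections.

```python
import numpy as np

n_h, n_v = 4, 2
n = n_h + n_v                 # index hidden neurons first, then visible
mask = np.zeros((n, n), dtype=bool)  # mask[i, j] is True if j -> i exists

hidden = slice(0, n_h)
visible = slice(n_h, n)

mask[visible, hidden] = True   # feedforward: hidden -> visible
mask[hidden, hidden] = True    # lateral connections among hidden neurons
mask[hidden, visible] = True   # feedback: visible -> hidden
np.fill_diagonal(mask, False)  # no self-connections

print(mask.sum())  # → 28 directed connections
```

Counting per block: 12 lateral (4×4 minus the 4 self-connections), 8 feedforward (2×4), and 8 feedback (4×2), for 28 directed connections in total.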