2022
DOI: 10.1016/j.neunet.2022.09.003

Continuous learning of spiking networks trained with local rules

Cited by 5 publications (10 citation statements)
References 47 publications
“…As for other SNN-based methods for continual learning, (Antonov, Sviatov, and Sukhov 2022) determines the importance of synaptic weights via stochastic Langevin dynamics with local STDP and achieves continual learning through unsupervised learning. (Skatchkovsky, Jang, and Simeone 2022) introduced an online rule based on the Bayesian SNN model.…”
Section: Related Work
confidence: 99%
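The quoted method pairs local STDP with stochastic Langevin dynamics to estimate which synapses matter for previously learned tasks. Below is a minimal sketch of that idea under stated assumptions, not the authors' implementation: the rate-based stand-in for STDP (`stdp_drift`), the noise scale `temperature`, and the inverse-variance importance readout are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n_syn = 100                      # number of synapses
w = rng.normal(0.5, 0.1, n_syn)  # synaptic weights
eta = 1e-3                       # learning rate
temperature = 1e-4               # Langevin noise scale (assumed)
trajectory = []

def stdp_drift(w, pre_rate, post_rate):
    # Toy rate-based stand-in for pairwise STDP: potentiate correlated
    # pre/post activity toward the upper bound, depress otherwise.
    return pre_rate * post_rate * (1.0 - w) - 0.5 * pre_rate * w

for step in range(5000):
    pre = rng.random(n_syn)      # surrogate presynaptic rates
    post = rng.random(n_syn)     # surrogate postsynaptic rates
    noise = rng.normal(0.0, 1.0, n_syn)
    # Langevin update: deterministic STDP drift plus sqrt(2*T*eta) noise.
    w += eta * stdp_drift(w, pre, post) + np.sqrt(2 * temperature * eta) * noise
    trajectory.append(w.copy())

traj = np.array(trajectory[1000:])  # discard burn-in
# A synapse whose weight barely fluctuates under the noise is tightly
# pinned by the drift, so its inverse variance serves as an importance score.
importance = 1.0 / traj.var(axis=0)
print("five example importance scores:", importance[:5].round(1))
```

Important synapses would then be protected when new tasks arrive, for example by scaling down their learning rate, which is the usual role of an importance estimate in continual learning.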
“…Input-output relationship of a push-pull LIF neuron pair in the average spike rate domain. Simulated using a single spiking input connected with a weight of 1, with illustration parameters τ_s = 0.01 s, µ = 0.15, τ_m = 0.002 s in (11), (13).…”
Section: Spiking Neural Network and STDP Learning
confidence: 99%
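The quoted caption gives enough parameters for a toy reconstruction of a single LIF neuron's rate-domain input-output behaviour. The sketch below uses the quoted τ_s, τ_m, µ and the unit input weight, but everything else is an assumption: the input rate `in_rate`, the reading of µ as the firing threshold, the reset-to-zero rule, and the absence of a refractory period. The push-pull pairing and the cited equations (11), (13) belong to the citing paper and are not reproduced here.

```python
import numpy as np

dt = 1e-4      # simulation step (s)
tau_s = 0.01   # synaptic time constant (s), from the quote
tau_m = 0.002  # membrane time constant (s), from the quote
mu = 0.15      # firing threshold (assumed meaning of µ)
w_in = 1.0     # single spiking input with weight 1, from the quote

T = 1.0                          # simulated time (s)
steps = int(T / dt)
rng = np.random.default_rng(1)
in_rate = 100.0                  # Hz, assumed Poisson input rate
in_spikes = rng.random(steps) < in_rate * dt

i_syn = 0.0    # exponentially filtered synaptic current
v = 0.0        # membrane potential
n_out = 0
for t in range(steps):
    i_syn += dt * (-i_syn / tau_s) + w_in * in_spikes[t]
    v += dt * (i_syn - v) / tau_m    # leaky integration of the current
    if v >= mu:                      # threshold crossing -> output spike
        n_out += 1
        v = 0.0                      # reset, no refractory period
print(f"input {in_rate:.0f} Hz -> output {n_out / T:.0f} Hz")
```

Sweeping `in_rate` and recording the output rate traces out the average-spike-rate relationship the quoted figure describes.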
“…In recent years, the remarkable ability of biological entities for continual learning has triggered research in the emerging field of neuromorphic continual learning, which seeks to study how bio-plausible neural network architectures such as Hebbian learning networks and Spiking Neural Networks (SNNs) equipped with Spike-Timing-Dependent Plasticity (STDP) can be used for designing continual-learning systems without suffering from the compute- and memory-intensive overheads of traditional Deep Learning systems [9,10,11,12,13,14,15]. Following these realisations, a number of works covering a wide range of applications have been proposed, from task- and class-incremental learning setups [9] to the continual learning of object detection and robot navigation [6,8].…”
Section: Introduction
confidence: 99%
“…However, weight training in DNNs typically relies on the backpropagation algorithm, which is powerful but energy-intensive and sensitive to non-idealities in the weight updates of memristors [29,31,32]. Moreover, the biological plausibility of backpropagation remains an open question [33].…”
Section: Introduction
confidence: 99%