2022
DOI: 10.1038/s41467-022-34938-7

Sleep-like unsupervised replay reduces catastrophic forgetting in artificial neural networks

Abstract: Artificial neural networks are known to suffer from catastrophic forgetting: when learning multiple tasks sequentially, they perform well on the most recent task at the expense of previously learned tasks. In the brain, sleep is known to play an important role in incremental learning by replaying recent and old conflicting memory traces. Here we tested the hypothesis that implementing a sleep-like phase in artificial neural networks can protect old memories during new training and alleviate catastrophic forgetting…

Cited by 12 publications (5 citation statements)
References 73 publications (101 reference statements)
“…An artificial neural network consists of artificial neurons, which are essentially processing elements. Each neuron has several inputs, and a weight is assigned to each input 24 , 25 . The output of each neuron is obtained from the sum of all inputs multiplied by their weights.…”
Section: Methods (mentioning)
confidence: 99%
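As a concrete illustration of the weighted-sum neuron described in this statement, here is a minimal sketch; the NumPy implementation, the bias term, and the tanh activation are illustrative assumptions rather than details from the cited works.

```python
import numpy as np

def neuron_output(inputs, weights, bias=0.0, activation=np.tanh):
    """Sum of inputs multiplied by their weights, passed through an activation.

    The bias term and the tanh nonlinearity are illustrative choices.
    """
    return activation(np.dot(inputs, weights) + bias)

# Example: one neuron with three weighted inputs.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.3])
print(neuron_output(x, w))
```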
“…(Hammouamri, Masquelier, and Wilson 2022) achieves continual learning in SNNs by training an external network with evolutionary strategies to generate the firing threshold of the classifier. (Tadros et al. 2022) implements local plasticity to help the model correct its bias after learning new tasks, using a conversion algorithm to switch between rate coding and spike coding. In addition, ANN-oriented methods such as (Bricken et al. 2023) and (Shen, Dasgupta, and Navlakha 2021) have investigated mechanisms similar to the selectively activated Top-K function and provide efficient solutions for continual learning with ANN models.…”
Section: Related Work (mentioning)
confidence: 99%
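The switch between rate coding and spike coding mentioned above can be illustrated with a toy leaky integrate-and-fire (LIF) layer that reuses ANN weights as synaptic weights, so that output firing rates approximate the ANN's activations. The threshold, decay, and Poisson-style input coding below are assumptions for illustration, not the actual conversion algorithm of Tadros et al. (2022).

```python
import numpy as np

def lif_layer(weights, spikes_in, v, threshold=1.0, decay=0.9):
    """One timestep of a leaky integrate-and-fire layer.

    ANN weights are reused directly as synaptic weights; threshold and
    decay values here are illustrative, not taken from the paper.
    """
    v = decay * v + weights @ spikes_in            # integrate weighted input spikes
    spikes_out = (v >= threshold).astype(float)    # fire where threshold is crossed
    v = np.where(spikes_out > 0, 0.0, v)           # reset membrane after a spike
    return spikes_out, v

# Example: propagate a Poisson-coded input through one converted layer.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 8))             # ANN weights reused in the SNN
rates = rng.uniform(size=8)                        # ANN-style input activations
v = np.zeros(4)
counts = np.zeros(4)
for _ in range(100):                               # 100 timesteps of rate coding
    spikes_in = (rng.uniform(size=8) < rates).astype(float)
    spikes_out, v = lif_layer(W, spikes_in, v)
    counts += spikes_out
print(counts / 100)                                # output firing rates
```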
“…A fully connected ANN with two hidden layers was first trained on a randomly selected subset of the MNIST or Fashion MNIST (FMNIST) datasets using backpropagation. Subsequently, the sleep replay consolidation (SRC) algorithm was implemented as previously described in (Tadros et al. 2022). Briefly (see Supplementary Material for details), the ANN trained on limited data was mapped to a spiking neural network (SNN) with the same architecture.…”
Section: Algorithm (mentioning)
confidence: 99%
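A brief sketch of the first step described here: training a fully connected ANN with two hidden layers on a random subset of MNIST using backpropagation. The subset size, layer widths, optimizer, and number of epochs are illustrative assumptions, not the values used in the cited work.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

# Train on a small random subset of MNIST (subset size is an assumption).
train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
subset = Subset(train_set, torch.randperm(len(train_set))[:2000].tolist())
loader = DataLoader(subset, batch_size=64, shuffle=True)

# Fully connected ANN with two hidden layers (widths are illustrative).
ann = nn.Sequential(nn.Flatten(),
                    nn.Linear(28 * 28, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 10))

optimizer = torch.optim.SGD(ann.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                 # brief supervised training on limited data
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(ann(images), labels)
        loss.backward()
        optimizer.step()
```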
“…After the sleep phase, the SNN was remapped back to an ANN. In (Tadros et al. 2022), SRC was applied after each new task's training to avoid catastrophic forgetting; here, we applied it once after training with limited data.…”
Section: Algorithm (mentioning)
confidence: 99%
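The scheduling difference described above can be summarized with a pseudocode-level sketch; `train_ann` and `sleep_replay_consolidation` are hypothetical placeholders for the backpropagation phase and the ANN → SNN → sleep → ANN procedure, not an actual API.

```python
# All names here are hypothetical placeholders, not an actual API.

# (a) Tadros et al. (2022): a sleep phase after every task, protecting
#     earlier tasks against catastrophic forgetting.
for task_data in sequential_tasks:
    ann = train_ann(ann, task_data)
    ann = sleep_replay_consolidation(ann)   # ANN -> SNN -> sleep -> ANN

# (b) The citing work: a single sleep phase applied once, after training
#     on limited data.
ann = train_ann(ann, limited_data)
ann = sleep_replay_consolidation(ann)
```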