2022
DOI: 10.1609/aaai.v36i8.20837
Continual Learning through Retrieval and Imagination

Abstract: Continual learning is an intellectual ability of artificial agents to learn new streaming labels from sequential data. The main impediment to continual learning is catastrophic forgetting, a severe performance degradation on previously learned tasks. Although simply replaying all previous data or continuously adding model parameters could alleviate the issue, such strategies are impractical in real-world applications due to the limited available resources. Inspired by the mechanism of the human brain to deepen its past …
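As a rough illustration of the "replay all previous data" baseline the abstract mentions (a minimal sketch only, not the paper's retrieval-and-imagination method; the model and its training step are placeholder stubs introduced here for illustration):

import random

class StubModel:
    """Placeholder standing in for any SGD-trainable classifier."""
    def train_on(self, batch):
        pass  # one optimization step on a mini-batch

def train_with_full_replay(model, task_streams, epochs=1, batch_size=32):
    """Train task by task while replaying every example seen so far."""
    memory = []  # grows without bound -- exactly why the abstract calls this impractical
    for task_data in task_streams:
        combined = memory + list(task_data)
        for _ in range(epochs):
            random.shuffle(combined)
            for i in range(0, len(combined), batch_size):
                model.train_on(combined[i:i + batch_size])
        memory.extend(task_data)  # keep the whole task for future replay
    return model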

Cited by 13 publications (4 citation statements)
References 39 publications
“…Replay methods preserve the sample information from old tasks and utilize it for the training process of new tasks to mitigate catastrophic forgetting [10–22]. iCaRL [15] selects some representative samples from old datasets and adds them to the dataset of new tasks.…”
Section: Replay Methods
confidence: 99%
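The statement above describes exemplar selection in the spirit of iCaRL. Below is a minimal sketch of herding-style selection, assuming L2-normalized feature embeddings for the samples of one class (an illustrative assumption, not iCaRL's exact implementation):

import numpy as np

def select_exemplars(features: np.ndarray, m: int):
    """Greedily pick m sample indices whose running mean best matches the class mean."""
    class_mean = features.mean(axis=0)
    chosen, running_sum = [], np.zeros_like(class_mean)
    for k in range(1, m + 1):
        # distance of each candidate's running mean to the true class mean
        candidates = (running_sum + features) / k
        dists = np.linalg.norm(candidates - class_mean, axis=1)
        dists[chosen] = np.inf  # never pick the same sample twice
        idx = int(np.argmin(dists))
        chosen.append(idx)
        running_sum += features[idx]
    return chosen

The selected indices would then point to the stored exemplars that are mixed into the training set of each new task.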
“…Data replay-based methods interleave data related to previous tasks with new data so that the model is 'reminded' of the old knowledge. One way to obtain the old data is to train deep generative models that synthesize 'fake' samples mimicking data from previous classes (Shin et al. 2017; Wu et al. 2018; Rao et al. 2019; Wang et al. 2022), although generating realistic samples remains a challenge. Another way is to directly store a small number of previous-class samples to train together with the new data (Rebuffi et al. 2017; Chaudhry et al. 2019).…”
Section: Related Work
confidence: 99%
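The second option described above, storing a small number of previous samples, is often implemented with a bounded buffer. A minimal sketch using reservoir sampling (a common choice assumed here for illustration, not the cited papers' code):

import random

class EpisodicMemory:
    """Fixed-size memory of old samples for replay alongside new-task batches."""
    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        """Reservoir sampling: every sample seen so far is kept with equal probability."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def replay_batch(self, k: int):
        """Draw up to k stored samples to interleave with the current task's batch."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))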
“…Continual Learning (CL) studies the problem of learning new knowledge incrementally while preserving previously learned knowledge. Following [10], CL methods can be divided into three groups: rehearsal-based methods [30,44,36,3,37,11], regularization-based methods [18,20,22,19,42], and parameter isolation methods [31,32,28]. However, the majority of existing CL methods focus on supervised settings, while Continual Self-Supervised Learning (CSSL) is surprisingly under-investigated.…”
Section: Continual Learning
confidence: 99%
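Of the three groups listed above, regularization-based methods are the easiest to sketch compactly. The fragment below shows an EWC-style quadratic penalty that anchors parameters important to earlier tasks (illustrative only; the function name and the identity-Fisher stand-in are assumptions, not taken from the surveyed papers):

import torch

def ewc_penalty(model, anchor_params, fisher_diag, lam=1.0):
    """0.5 * lam * sum_i F_i * (theta_i - theta*_i)^2 over named parameters."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        if name in fisher_diag:
            penalty = penalty + (fisher_diag[name] * (p - anchor_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Usage sketch: anchor a small linear layer after task 1, then regularize the task-2 loss.
net = torch.nn.Linear(4, 2)
anchor = {n: p.detach().clone() for n, p in net.named_parameters()}
fisher = {n: torch.ones_like(p) for n, p in net.named_parameters()}  # identity Fisher as a stand-in
reg = ewc_penalty(net, anchor, fisher, lam=0.4)  # added to the new task's training loss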