2019
DOI: 10.1609/aaai.v33i01.33011352
Scalable Recollections for Continual Lifelong Learning

Abstract: Given the recent success of Deep Learning applied to a variety of single tasks, it is natural to consider more human-realistic settings. Perhaps the most difficult of these settings is that of continual lifelong learning, where the model must learn online over a continuous stream of non-stationary data. A successful continual lifelong learning system must have three key capabilities: it must learn and adapt over time, it must not forget what it has learned, and it must be efficient in both training time and memory […]

Cited by 42 publications (26 citation statements) · References 29 publications
“…At the same time, more training data usually means lower training efficiency. These challenges could be alleviated by optimizing data storage methods [99], but they still cannot be completely overcome. If the memory capacity is limited [17], the sample size of a single knowledge category will gradually decrease with the accumulation of tasks, and its impact on the model will gradually decrease.…”
Section: Discussion and Comparison
confidence: 99%
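To make the limited-memory scenario in the statement above concrete, here is a minimal, illustrative Python sketch (not taken from any of the cited papers) of a fixed-capacity, class-balanced exemplar memory: as classes accumulate, the per-class quota shrinks, so each old class is represented by fewer and fewer samples. The class and method names are hypothetical.

```python
# Illustrative sketch (assumption, not from the cited papers): a fixed-capacity,
# class-balanced exemplar memory. The per-class quota capacity // num_classes
# shrinks as new classes arrive, which is the effect described in the statement.
import random
from collections import defaultdict

class ClassBalancedMemory:
    def __init__(self, capacity):
        self.capacity = capacity          # total number of stored exemplars
        self.store = defaultdict(list)    # class label -> list of exemplars

    def add(self, x, y):
        self.store[y].append(x)
        self._rebalance()

    def _rebalance(self):
        # per-class quota shrinks as more classes are observed
        quota = max(1, self.capacity // len(self.store))
        for label, samples in self.store.items():
            if len(samples) > quota:
                # keep a random subset; herding or other selection also works
                self.store[label] = random.sample(samples, quota)

    def sample(self, k):
        pool = [(x, y) for y, xs in self.store.items() for x in xs]
        return random.sample(pool, min(k, len(pool)))
```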
“…Guo et al. [98] proposed an exemplar-based subspace clustering method. Riemer et al. [99] used an autoencoder-based model to support scalable storage and retrieval of old data.…”
Section: Methods Description
confidence: 99%
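The statement above refers to the autoencoder-based storage idea of the paper tracked here. As a hedged sketch of that general idea (a simplified reading, not the authors' exact architecture), the following PyTorch snippet stores low-dimensional codes of past inputs instead of raw samples and decodes them when old data is needed for replay; `ReplayAutoencoder`, `remember`, and `recollect` are illustrative names introduced here.

```python
# Minimal sketch of autoencoder-based storage for replay (an assumption-laden
# simplification, not the exact model of Riemer et al. [99]): compress past
# inputs to low-dimensional codes, keep only the codes, and decode them back
# when old data is replayed.
import torch
import torch.nn as nn

class ReplayAutoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = ReplayAutoencoder()
stored_codes = []      # compressed "recollections" of past inputs
stored_labels = []

def remember(x, y):
    # keep only the code (code_dim floats) plus the label,
    # instead of the raw input (input_dim floats)
    with torch.no_grad():
        stored_codes.append(ae.encoder(x))
    stored_labels.append(int(y))

def recollect(batch_size):
    # decode a random subset of stored codes into approximate past samples
    idx = torch.randint(len(stored_codes), (batch_size,)).tolist()
    codes = torch.stack([stored_codes[i] for i in idx])
    with torch.no_grad():
        return ae.decoder(codes), torch.tensor([stored_labels[i] for i in idx])
```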
“…Instead of storing raw samples as exemplars, Shin et al. (2017); Riemer et al. (2019) generate “pseudo” samples akin to past data. The NLG model itself can generate pseudo exemplars.…”
Section: B1 Comparison to Pseudo Exemplar Replay
confidence: 99%
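For context on the pseudo-exemplar replay mentioned in that statement, below is a hedged sketch of the generic generative-replay recipe (not the specific method of Shin et al. or Riemer et al.): pseudo samples are drawn from a generator trained on earlier tasks, labeled by the previous model, and mixed with new data for each update. `generator.latent_dim`, `prev_model`, and `replay_training_step` are hypothetical placeholders.

```python
# Hedged sketch of pseudo-exemplar (generative) replay, assuming a trained
# `generator` over past data and a frozen `prev_model` from the previous task.
import torch

def replay_training_step(model, prev_model, generator, new_x, new_y,
                         optimizer, loss_fn, n_pseudo=32):
    # generate pseudo samples resembling past data and pseudo-label them
    with torch.no_grad():
        pseudo_x = generator(torch.randn(n_pseudo, generator.latent_dim))
        pseudo_y = prev_model(pseudo_x).argmax(dim=1)

    # one gradient step on the mixture of new data and pseudo old data
    x = torch.cat([new_x, pseudo_x])
    y = torch.cat([new_y, pseudo_y])
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```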
“…A few methods reduce this storage requirement by storing a compressed representation of the data, including Lifelong Generative Modeling (Ramapuram et al., 2017), FearNet (Kemker and Kanan, 2018), Deep Generative Replay (Shin et al., 2017), and Expert Gates (Aljundi et al., 2017). Some methods specifically use autoencoders for this task (Zhou et al., 2012; Riemer et al., 2017; Parisi et al., 2018b). However, even when compressed, data requires significantly more parameters to store than parametric networks.…”
Section: Prior Work
confidence: 99%