2022
DOI: 10.48550/arxiv.2201.06534
Preprint
Logarithmic Continual Learning

Abstract: We introduce a neural network architecture that logarithmically reduces the number of self-rehearsal steps in the generative rehearsal of continually learned models. In continual learning (CL), training samples come in subsequent tasks, and the trained model can access only a single task at a time. To replay previous samples, contemporary CL methods bootstrap generative models and train them recursively with a combination of current and regenerated past data. This recurrence leads to superfluous computations a…
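The abstract contrasts recursive generative rehearsal, where each new task replays all earlier tasks, with the paper's logarithmic reduction of self-rehearsal steps. As a back-of-the-envelope illustration only (this is not the paper's architecture), the sketch below compares the total step counts of a naive linear-per-task schedule against a hypothetical O(log t) schedule:

```python
import math

def naive_rehearsal_steps(num_tasks):
    # Naive generative rehearsal: when learning task t, the model
    # regenerates data for each of the t-1 earlier tasks, so the
    # per-task cost grows linearly and the total cost quadratically.
    return sum(t - 1 for t in range(1, num_tasks + 1))

def logarithmic_rehearsal_steps(num_tasks):
    # Hypothetical schedule in the spirit of the title: each new task
    # triggers only ceil(log2(t)) self-rehearsal steps.
    return sum(math.ceil(math.log2(t)) for t in range(2, num_tasks + 1))

for n in (4, 16, 64):
    print(n, naive_rehearsal_steps(n), logarithmic_rehearsal_steps(n))
```

For 16 tasks the naive schedule already needs 120 regeneration steps versus 49 under the logarithmic one, and the gap widens with every task.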

Cited by 1 publication (1 citation statement) | References 9 publications
“…In the classic paradigm, model retraining using the new data leads to a sharp decrease in performance on previously learned tasks, a phenomenon known as Catastrophic Forgetting (CF) [Goodfellow et al, 2013]. The related literature suggests that the Rehearsal (also known as replay-based) approach appears to be a strong solution to CF [Pellegrini et al, 2020, Buzzega et al, 2021, Masarczyk et al, 2022, Kim et al, 2020]. To the best of our knowledge, currently, only two main replay-based approaches designed for Multi-Label classification are proposed in the literature:…”
Section: Related Work
confidence: 99%
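The citation statement describes rehearsal (replay-based) methods as a remedy for catastrophic forgetting. A minimal sketch of the rehearsal idea, using a reservoir-sampled replay buffer (a common generic variant, not any specific method from the cited works):

```python
import random

class ReplayBuffer:
    """Fixed-capacity buffer that retains a uniform sample of past data.

    During training on a new task, batches are mixed with samples drawn
    from this buffer so the model keeps rehearsing earlier tasks.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []
        self.seen = 0  # total number of samples ever offered

    def add(self, sample):
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
        else:
            # Reservoir sampling: every sample seen so far stays in the
            # buffer with equal probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.samples[j] = sample

    def replay(self, batch_size):
        k = min(batch_size, len(self.samples))
        return random.sample(self.samples, k)
```

Generative rehearsal, as discussed in the abstract above, replaces this explicit buffer with a generative model that regenerates past samples on demand.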