2022 | Preprint
DOI: 10.48550/arxiv.2201.00766

Class-Incremental Continual Learning into the eXtended DER-verse

Matteo Boschini, Lorenzo Bonicelli, Pietro Buzzega, et al.

Abstract: The staple of human intelligence is the capability of acquiring knowledge in a continuous fashion. In stark contrast, Deep Networks forget catastrophically and, for this reason, the sub-field of Class-Incremental Continual Learning fosters methods that learn a sequence of tasks incrementally, blending sequentially-gained knowledge into a comprehensive prediction. This work aims at assessing and overcoming the pitfalls of our previous proposal Dark Experience Replay (DER), a simple and effective approach that c…
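The abstract's reference to DER can be made concrete with its training objective, summarized here from the authors' earlier work (Buzzega et al., NeurIPS 2020); the notation below is ours, not quoted from the paper. The model f_\theta is trained with cross-entropy on the current task while a buffer \mathcal{M} supplies past examples x' together with the logits z' recorded when they were stored:

\mathcal{L}_{\text{DER}} = \mathbb{E}_{(x,y)} \left[ \ell_{\text{CE}}(f_\theta(x), y) \right] + \alpha \, \mathbb{E}_{(x',z') \sim \mathcal{M}} \left[ \lVert h_\theta(x') - z' \rVert_2^2 \right]

where h_\theta(x') denotes pre-softmax logits. DER++ additionally replays ground-truth buffer labels:

\mathcal{L}_{\text{DER++}} = \mathcal{L}_{\text{DER}} + \beta \, \mathbb{E}_{(x'',y'') \sim \mathcal{M}} \left[ \ell_{\text{CE}}(f_\theta(x''), y'') \right]

with \alpha and \beta as trade-off hyperparameters.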

Help me understand this report
View published versions

Search citation statements

Order By: Relevance

Paper Sections

Select...
3
1
1

Citation Types

0
0
0

Year Published

2023
2023
2023
2023

Publication Types

Select...
1
1

Relationship

0
2

Authors

Journals

citations
Cited by 2 publications
(18 citation statements)
references
References 36 publications
0
0
0
Order By: Relevance
“…Among all approaches for continual learning, the rehearsal-based approaches are the most widely used due to their convenience and effectiveness. In order to tackle the catastrophic forgetting issue, rehearsal-based approaches [27,11,9,2,19,31,22,24,26,7,1,5,8,6,10,4] attempt to preserve the old knowledge from all previous tasks by a memory buffer and replay it when the model learns on a new task. [27] propose to save a small subset of old data and replay it in new tasks.…”
Section: Related Work
confidence: 99%
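The rehearsal mechanism this passage describes is small enough to sketch. The Python fragment below is a minimal, hypothetical illustration (class and method names are ours) of a fixed-size memory buffer filled by reservoir sampling, the update rule used by ER- and DER-style methods:

import random

class ReplayBuffer:
    """Fixed-capacity memory for rehearsal (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []       # stored (example, label) pairs
        self.num_seen = 0    # stream examples observed so far

    def add(self, example, label):
        # Reservoir sampling: every example seen so far remains in the
        # buffer with equal probability capacity / num_seen.
        if len(self.data) < self.capacity:
            self.data.append((example, label))
        else:
            j = random.randint(0, self.num_seen)  # uniform over 0..num_seen
            if j < self.capacity:
                self.data[j] = (example, label)
        self.num_seen += 1

    def sample(self, batch_size):
        # Draw a random mini-batch of old data to replay next to new data.
        return random.sample(self.data, min(batch_size, len(self.data)))

During training on a new task, each incoming batch is interleaved with buffer.sample(batch_size), so the loss covers old and new classes together; this is the replay scheme the passage attributes to [27] and its follow-ups.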
“…Concretely, rehearsal-based approaches attempt to preserve the old knowledge by a continuously updated memory buffer where limited samples are selected from all previous tasks and then available in the new task. Among recent advances of rehearsal-based methods, most of them focus on designing a proper updating strategy of the buffer [26,27,19,6] and exploiting the buffer data with extra regularization constraints [26,7,6,24,6]. Although these efforts indeed contribute to mitigating catastrophic forgetting, the main cause of this challenge for rehearsal-based CIL is still not fully addressed.…”
Section: Introduction
confidence: 99%
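To make "exploiting the buffer data with extra regularization constraints" concrete, here is a hedged PyTorch-style sketch of one common constraint, a DER-style logit-matching penalty on replayed examples; model, the buffer's sample_logits interface, and the weight alpha are illustrative assumptions, not code from any cited paper:

import torch.nn.functional as F

def training_step(model, optimizer, x, y, buffer, alpha=0.5):
    # Supervised loss on the current task's mini-batch.
    loss = F.cross_entropy(model(x), y)

    # Extra regularization on replayed data: keep today's logits close
    # to the logits stored when the buffered examples were first seen.
    if len(buffer) > 0:  # assumes the buffer exposes __len__
        buf_x, buf_logits = buffer.sample_logits(x.size(0))  # hypothetical API
        loss = loss + alpha * F.mse_loss(model(buf_x), buf_logits)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Swapping the MSE term for a cross-entropy on stored labels, or combining both, recovers the DER++ variant mentioned above.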
“…DualHSIC is a general framework that can be combined with almost any mainstream rehearsal-based method. Therefore, we incorporate DualHSIC into multiple SOTA rehearsal-based methods, including ER [26], DER++ [19], X-DER-RPC [16], and ER-ACE [20]…”
Section: Experiments Setting
confidence: 99%