2021
DOI: 10.48550/arxiv.2109.00328
Preprint

Memory-Free Generative Replay For Class-Incremental Learning

Xiaomeng Xin,
Yiran Zhong,
Yunzhong Hou
et al.

Abstract: Regularization-based methods are beneficial for alleviating the catastrophic forgetting problem in class-incremental learning. In the absence of old task images, they often assume that old knowledge is well preserved if the classifier produces similar output on new images. In this paper, we find that their effectiveness largely depends on the nature of the old classes: they work well on classes that are easily distinguishable from one another but may fail on more fine-grained ones, e.g., boy and girl. In spirit, …
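The regularization the abstract alludes to is typically an output-distillation penalty on new-task images. Below is a minimal, hypothetical PyTorch-style sketch of that idea (names such as `incremental_loss`, `lam`, and the frozen `old_model` are assumptions, not the paper's code): the updated classifier is penalized when its old-class outputs on new images drift away from those of a frozen copy trained on previous tasks.

```python
import torch
import torch.nn.functional as F

def distillation_regularizer(old_logits, new_logits, T=2.0):
    """Distillation term: the updated model should produce similar (softened)
    outputs to the frozen old model on the same images."""
    old_prob = F.softmax(old_logits / T, dim=1)
    new_log_prob = F.log_softmax(new_logits / T, dim=1)
    # Scale by T^2, as is conventional for temperature-scaled distillation.
    return F.kl_div(new_log_prob, old_prob, reduction="batchmean") * (T * T)

def incremental_loss(new_model, old_model, images, labels, num_old_classes, lam=1.0):
    """Cross-entropy on the new classes plus a distillation penalty that keeps
    the old-class outputs close to those of the frozen previous-task model."""
    logits = new_model(images)
    with torch.no_grad():
        old_logits = old_model(images)        # frozen copy from the previous task
    ce = F.cross_entropy(logits, labels)      # supervise the new-task labels
    kd = distillation_regularizer(old_logits[:, :num_old_classes],
                                  logits[:, :num_old_classes])
    return ce + lam * kd
```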

Cited by 2 publications (3 citation statements)
References 23 publications
“…Recent popular CIL methods can be categorized into three classes: replay-based, regularization-based, and parameter-isolation-based [4,25,17]. Replay-based methods preserve a small amount of data from previous tasks in memory, or a generative model, and replay these data when training the model on new data to overcome catastrophic forgetting [30,18,39,10,37]. Regularization-based methods add terms to the final loss that use priors or knowledge distillation to restrict the model from changing too much and forgetting previous knowledge [12,14,43,15,9,6].…”
Section: Related Work, 2.1 Class Incremental Learning (mentioning)
confidence: 99%
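As a concrete illustration of the exemplar-replay idea described in the quote above, here is a small, hypothetical sketch (the `ExemplarMemory` class and its parameters are illustrative, not taken from any of the cited works): a fixed-size per-class buffer whose samples are mixed into new-task batches.

```python
import random
import torch

class ExemplarMemory:
    """Hypothetical exemplar buffer: keep a few (image, label) pairs per old
    class and replay them alongside new-task batches."""
    def __init__(self, per_class=20):
        self.per_class = per_class
        self.store = {}                       # class id -> list of image tensors

    def add(self, images, labels):
        """Store up to `per_class` exemplars for each class seen so far."""
        for img, lbl in zip(images, labels):
            bucket = self.store.setdefault(int(lbl), [])
            if len(bucket) < self.per_class:
                bucket.append(img.clone())

    def sample(self, batch_size):
        """Draw a random mini-batch of stored exemplars for replay."""
        pool = [(img, c) for c, imgs in self.store.items() for img in imgs]
        batch = random.sample(pool, min(batch_size, len(pool)))
        images = torch.stack([img for img, _ in batch])
        labels = torch.tensor([c for _, c in batch])
        return images, labels
```

In training, the sampled exemplars would typically be concatenated with the current new-task batch before computing the loss, so gradients also reflect the old classes.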
“…Recently, most class-incremental approaches fight catastrophic forgetting by replaying seen data [30,18,39] or distilling the previous model [15,9,6]. Although distillation and replay help balance plasticity and stability, the representation model learned in the last phase still needs to be fine-tuned endlessly in the next phase; this affects the whole body of the model and causes substantial forgetting even when only a few new concepts are learned, limiting the performance of class-incremental learning.…”
Section: Introduction (mentioning)
confidence: 99%
“…[4,20] alternatively use a GAN architecture to synthesize images: they fix a trained network as a discriminator and optimize a generator to derive images that can be used to distill knowledge from the fixed network into a new network. Recently, [26,30,29] have integrated the idea of generative replay into the class-incremental learning problem using DeepInversion. However, due to the more challenging setting of FSCIL, these approaches cannot simply be migrated to handle the few-shot scenario.…”
Section: Data-Free Knowledge Distillation (mentioning)
confidence: 99%
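To make the DeepInversion-style generative replay mentioned above more concrete, the following is a rough, simplified sketch (the function name, hyperparameters, and the total-variation prior are assumptions; real DeepInversion additionally matches batch-norm statistics): pixels are optimized so that a frozen classifier assigns them high confidence for a chosen class, and the resulting surrogate images can then be replayed to distill the frozen model into a new one.

```python
import torch
import torch.nn.functional as F

def invert_class_images(frozen_model, target_class, num_images=16,
                        image_shape=(3, 32, 32), steps=500, lr=0.05, tv_weight=1e-4):
    """Synthesize surrogate images for `target_class` by optimizing pixels so
    that a frozen classifier assigns them high confidence (inversion sketch)."""
    frozen_model.eval()
    x = torch.randn(num_images, *image_shape, requires_grad=True)
    targets = torch.full((num_images,), target_class, dtype=torch.long)
    opt = torch.optim.Adam([x], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        logits = frozen_model(x)
        ce = F.cross_entropy(logits, targets)    # push images toward the target class
        # Total-variation prior keeps the inverted images locally smooth.
        tv = ((x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
              + (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean())
        (ce + tv_weight * tv).backward()
        opt.step()

    return x.detach()    # replayed later to distill the frozen model into the new one
```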