2021
DOI: 10.48550/arxiv.2106.09701
Preprint

Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning

Abstract: Modern computer vision applications suffer from catastrophic forgetting when incrementally learning new concepts over time. The most successful approaches to alleviate this forgetting require extensive replay of previously seen data, which is problematic when memory constraints or data legality concerns exist. In this work, we consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL), where an incremental learning agent must learn new concepts over time without storing generators or trai…

Cited by 5 publications (8 citation statements)
References 35 publications
“…Fig. 6 reports the top-1 accuracy of task 2 on old classes (0-19), new classes (20-39), and all seen classes (0-39) with four different ratios. From Fig.…”
Section: Methods Analysis
confidence: 99%
“…That means if we only provide a pre-trained model for the starting task, i.e., our problem setting, we would not be able to leverage this strategy. To tackle this issue, [29] and [6] employ synthetic images generated by data-free knowledge distillation methods [5,32] to alleviate catastrophic forgetting. Compared with them, although we also utilize such methods, we systematically show that regularization-based methods inherently suffer from catastrophic forgetting on fine-grained classes.…”
Section: Related Work
confidence: 99%
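The data-free distillation referenced in the excerpt above replays synthetic images through the frozen old model and penalises the new model for drifting from its soft outputs. A minimal numpy sketch of a temperature-softened distillation loss of that general shape (the function names and the temperature value are illustrative assumptions, not taken from the cited papers):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs.

    In data-free replay, `teacher_logits` would come from the frozen
    old model evaluated on synthetic images; T and names are illustrative.
    """
    p = softmax(teacher_logits, T)           # soft targets from the old model
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl) * T * T)        # T^2 keeps gradient scale comparable
```

The loss is zero when the student exactly matches the teacher and grows as the two output distributions diverge.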
“…A global model is shared with clients to avoid data leakage. However, data-free knowledge transfer and model inversion techniques (Yin et al 2020; Smith et al 2021; Yin et al 2021) can recover data from the pre-trained model and thus can be used to attack shared models in federated learning. Knowledge distillation utilises a domain-expert teacher model to train a compact student model while pursuing competitive recognition accuracy (Xu, Liu, and Loy 2020; Chen et al 2021).…”
Section: Related Work
confidence: 99%
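Model inversion, as mentioned in the excerpt above, recovers inputs from a trained network by optimising noise until the frozen model confidently predicts a chosen class. A toy numpy sketch, with a random linear classifier standing in for the pre-trained model (the model, sizes, learning rate, and step count are all illustrative assumptions; real attacks add image priors and batch-norm statistics matching):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pre-trained" linear classifier standing in for the shared model.
W = rng.normal(size=(8, 3))                  # 8 input features, 3 classes

def softmax(z):
    z = z - z.max()                          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def invert(target, steps=500, lr=0.1):
    """Gradient-descend on the *input* (the weights stay frozen) until
    the model assigns `target` the highest probability."""
    x = rng.normal(size=8) * 0.01            # start from near-zero noise
    onehot = np.eye(3)[target]
    for _ in range(steps):
        p = softmax(x @ W)
        x -= lr * (W @ (p - onehot))         # d(cross-entropy)/dx for a linear model
    return x
```

Running `invert(c)` for any class `c` yields an input the frozen model classifies as `c`, which is the basic mechanism behind the data-recovery attacks the excerpt warns about.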
“…[32] exploits publicly available training data. [3,26] train generative models to extract synthetic samples.…”
Section: Related Work
confidence: 99%
“…the model's parameters, which can be used to learn more efficiently. Lastly, (4) is explored in CL with settings such as data-free class-incremental scenarios [3,26], where access to the previous data is forbidden. Again, this scenario assumes a single agent and access to the current data, making it difficult to share knowledge between multiple agents.…”
Section: Introduction
confidence: 99%