2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00924

Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning

Cited by 89 publications (32 citation statements)
References 26 publications
“…To avoid privacy concerns, this work also follows a line of work that does not store real examples for experience replay, such as generating examples with a GAN (Atkinson et al., 2018), synthesizing examples (Xu et al., 2022) via model inversion (Smith et al., 2021b), and using unlabeled data from the learning environment (Smith et al., 2021a). In the language domain, LAMOL (Sun et al., 2019) trains a language model to solve the current task and generate current training examples simultaneously, so that the model can later generate "pseudo" old examples for replay before any new task.…”
Section: Related Work (mentioning)
Confidence: 99%
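The excerpt above mentions synthesizing replay examples by model inversion against a frozen old classifier. The following is a minimal sketch of that general idea in PyTorch; the function name, the 3x32x32 image shape, the total-variation prior, and all hyperparameters are illustrative assumptions, not the cited authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def invert_examples(old_model, num_old_classes, batch_size=32, steps=200, lr=0.05):
    """Optimize random noise so the frozen old model classifies it as old classes."""
    old_model.eval()
    for p in old_model.parameters():
        p.requires_grad_(False)  # only the synthetic images are optimized

    # Start from random "canvas" images and assign target old-class labels.
    images = torch.randn(batch_size, 3, 32, 32, requires_grad=True)
    targets = torch.randint(0, num_old_classes, (batch_size,))
    optimizer = torch.optim.Adam([images], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        logits = old_model(images)
        # Classification loss: make the frozen model confident on the targets.
        ce = F.cross_entropy(logits, targets)
        # Simple image prior: penalize high-frequency noise (total variation).
        tv = (images[..., 1:, :] - images[..., :-1, :]).abs().mean() + \
             (images[..., :, 1:] - images[..., :, :-1]).abs().mean()
        (ce + 1e-3 * tv).backward()
        optimizer.step()

    return images.detach(), targets
```

The returned synthetic images can then be mixed into the replay buffer in place of stored real data, which is what makes the approach "data-free".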
“…Continual learning focuses on alleviating catastrophic forgetting while remaining discriminative for newly learned classes. To address this problem, many works [5,6,12,48,78] propose to review knowledge through a rehearsal-based mechanism. The knowledge can be stored in multiple forms, such as examples [5,7,10,12,74,84], prototypes [36,107,108], generative networks [61], etc.…”
Section: Related Work (mentioning)
Confidence: 99%
“…In comparison, distillation-based approaches can be directly applied to continual learning of new classes by distilling knowledge from the old classifier (for old classes) to the new classifier (for both new and old classes) while learning new knowledge [18,19,37,38,39,40,41,42,43], where the old knowledge is often implicitly represented by the soft outputs of the old classifier given a small amount of stored old images and/or images of the new classes as inputs. A distillation loss is added to the original cross-entropy loss when training the new classifier; it encourages the new classifier to produce outputs similar to those of the old classifier for any input image.…”
Section: Related Work (mentioning)
Confidence: 99%
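The excerpt above describes adding a distillation term to the usual cross-entropy loss. Below is a minimal sketch of such a combined loss in PyTorch; the assumption that the first num_old_classes logits of the new classifier correspond to the old classes, as well as the temperature T and weight alpha, are illustrative choices rather than values from the cited papers.

```python
import torch
import torch.nn.functional as F

def incremental_loss(new_logits, old_logits, labels, num_old_classes, T=2.0, alpha=1.0):
    # Standard cross-entropy on the new classifier's full output (old + new classes).
    ce = F.cross_entropy(new_logits, labels)

    # Distillation: the new classifier's outputs on the old classes should match
    # the soft (temperature-scaled) outputs of the frozen old classifier.
    new_old_log_probs = F.log_softmax(new_logits[:, :num_old_classes] / T, dim=1)
    old_soft_targets = F.softmax(old_logits / T, dim=1)
    kd = F.kl_div(new_old_log_probs, old_soft_targets, reduction="batchmean") * (T * T)

    return ce + alpha * kd
```

In use, old_logits would come from a frozen copy of the previous-task classifier evaluated on the same batch (stored exemplars, new-class images, or synthesized replay data, depending on the method), so that old knowledge is preserved without explicitly revisiting the old training set.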