2020
DOI: 10.48550/arxiv.2008.13710
Preprint

Initial Classifier Weights Replay for Memoryless Class Incremental Learning

Cited by 3 publications (4 citation statements) | References 0 publications
“…After the training of each new task, the classifier of each previous task is replaced by a scaled version of its stored initial classifier with the help of aggregate statistics. A similar method [94] standardizes the stored initial classifier of each task, resulting in fair and balanced classification across tasks. Chaudhry et al. [92] proposed Hindsight Anchor Learning (HAL), which stores an anchor per task in addition to data instances.…”
Section: It Uses Classification Uncertainty and Data Augmentation To ...
confidence: 99%
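The replay mechanism quoted above can be sketched in a few lines. This is a minimal illustration, assuming the aggregate statistic is the mean L2 norm of the class-weight vectors; the cited papers may use different statistics.

```python
# Hedged sketch of "initial classifier weights replay": after training the
# newest task, each past task's classifier is replaced by its stored initial
# weights, rescaled via an aggregate statistic. The statistic chosen here
# (mean L2 norm of class-weight rows) is an assumption for illustration.
import numpy as np

def replay_initial_classifiers(init_weights, current_weights):
    """init_weights / current_weights: dict task_id -> (n_classes, dim) array."""
    latest = max(current_weights)                      # the just-trained task
    ref_norm = np.linalg.norm(current_weights[latest], axis=1).mean()
    replayed = {}
    for t, w_init in init_weights.items():
        if t == latest:
            replayed[t] = current_weights[t]           # keep the fresh classifier
            continue
        init_norm = np.linalg.norm(w_init, axis=1).mean()
        replayed[t] = w_init * (ref_norm / init_norm)  # rescale replayed weights
    return replayed
```

Scaling the replayed weights toward the newest task's statistics keeps logits from old and new classifiers comparable, which is what makes memoryless prediction across tasks feasible.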
“…SI [12] uses the path integral over the optimization trajectory. Recently, Belouadah et al. [24], [25] assumed that all weights of the model should be normalized across tasks and standardized the initial classifier weights. Unlike per-parameter regularization, LwF [11] adds regularization on top of the network output.…”
Section: Related Work: A. Class Incremental Learning
confidence: 99%
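As a rough illustration of the standardization idea attributed to Belouadah et al. [24], [25], each task's stored initial classifier can be z-scored so that scores from different tasks lie on a comparable scale. The exact normalization in the cited works may differ; this sketch is illustrative only.

```python
# Hedged sketch: standardize each task's stored initial classifier
# (zero mean, unit variance) so that logits are comparable across tasks.
import numpy as np

def standardize_classifier(w):
    """w: (n_classes, dim) initial classifier weights for one task."""
    return (w - w.mean()) / (w.std() + 1e-8)

def standardized_logits(features, init_weights):
    """Score a (batch, dim) feature matrix against all standardized classifiers."""
    return {t: features @ standardize_classifier(w).T
            for t, w in init_weights.items()}
```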
“…Therefore, there is a need for IL with a reasonable balance between accuracy, memory consumption, and training efficiency. In IoT, little or no memory for historical data is preferred during continuous evolution [11].…”
Section: Introduction
confidence: 99%
“…Other works incrementally train task-related generative models for knowledge replay, but these generative models require notoriously heavy effort [13]. In addition, several attempts use regularization or knowledge distillation to implement memoryless methods that prevent DNNs from forgetting [11]. Balancing learning and forgetting is difficult, especially since the internal mechanism of catastrophic forgetting is not yet clear.…”
Section: Introduction
confidence: 99%
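For the memoryless regularization/distillation route mentioned in [11], a common realization is LwF-style output distillation: the new model's logits are pulled toward those of the frozen previous model, with no stored exemplars. The sketch below is a generic formulation, not the cited papers' exact loss; the temperature T is an assumed hyperparameter.

```python
# Hedged sketch of output-level knowledge distillation for memoryless IL:
# cross-entropy between the previous (frozen) model's soft targets and the
# new model's outputs on the same inputs.
import numpy as np

def softmax(z, T=2.0):
    """Temperature-softened softmax over the class axis; z: (batch, n_classes)."""
    e = np.exp((z - z.max(axis=1, keepdims=True)) / T)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(old_logits, new_logits, T=2.0):
    """Penalize drift of the new model's outputs from the old model's outputs."""
    p_old = softmax(old_logits, T)
    p_new = softmax(new_logits, T)
    return -(p_old * np.log(p_new + 1e-12)).sum(axis=1).mean()
```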