2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv45572.2020.9093562

ScaIL: Classifier Weights Scaling for Class Incremental Learning

Abstract: Incremental Learning (IL) is useful when artificial systems need to deal with streams of data and do not have access to all data at all times. The most challenging setting requires a constant complexity of the deep model and an incremental model update without access to a bounded memory of past data. Then, the representations of past classes are strongly affected by catastrophic forgetting. To mitigate its negative effect, an adapted fine tuning which includes knowledge distillation is usually deployed. We pro…

Cited by 68 publications (80 citation statements) | References 23 publications
“…The use of distillation loss is detrimental if exemplars of old classes are allowed. A similar result is presented in ScaIL [45], where more experiments on the effect of distillation are analyzed. The authors hypothesize that the detriment arises during the incremental learning process and that the errors are caused by class imbalance.…”
Section: B. Main Components (supporting)
confidence: 57%
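The statement above refers to the common class-incremental fine-tuning objective that mixes cross-entropy with a distillation term on old-class outputs. A minimal sketch of that objective (not the exact loss of any cited paper; the temperature T, the mixing weight lam, and the function name are assumptions):

```python
import torch.nn.functional as F

def il_finetune_loss(new_logits, old_logits, targets, n_old, T=2.0, lam=0.5):
    """Cross-entropy over all classes plus a distillation term that keeps
    the new model's old-class outputs close to the previous model's."""
    ce = F.cross_entropy(new_logits, targets)
    # Distill only on the first n_old (old-class) logits, softened by T.
    log_p_new = F.log_softmax(new_logits[:, :n_old] / T, dim=1)
    p_old = F.softmax(old_logits[:, :n_old] / T, dim=1)
    kd = F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)
    return (1 - lam) * ce + lam * kd
```

The detriment discussed in the excerpt is the observation that, once exemplars of old classes are kept in memory, setting lam > 0 can hurt accuracy compared to plain fine-tuning on the mixed data.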
“…Most data replay-based incremental learning methods [22], [43], [44] follow the iCaRL experiment benchmark protocol to arrange classes and select exemplars. In ScaIL [45], the experimental results show that exemplar selection based on herding can improve performance.…”
Section: B. Main Components (mentioning)
confidence: 99%
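Herding, as popularized by iCaRL for exemplar selection, greedily picks the samples whose running feature mean best approximates the class mean. A minimal sketch, assuming deep features have already been extracted for one class (the function name and NumPy formulation are illustrative):

```python
import numpy as np

def herding_selection(features, m):
    """Greedily select m exemplar indices whose running mean feature
    stays closest to the class mean (herding, as in iCaRL)."""
    # L2-normalize so the mean comparison is scale-invariant.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    mu = feats.mean(axis=0)
    selected, running_sum = [], np.zeros_like(mu)
    for k in range(1, m + 1):
        # Candidate running means if each sample were added next.
        candidates = (running_sum + feats) / k
        dists = np.linalg.norm(mu - candidates, axis=1)
        dists[selected] = np.inf  # never pick the same sample twice
        idx = int(np.argmin(dists))
        selected.append(idx)
        running_sum += feats[idx]
    return selected
```

The resulting index order is also a priority ranking, so the exemplar set can be truncated later if the memory budget shrinks.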
“…In this sense, Incremental Learning with Dual Memory (IL2M) [47] corrects the scores of old classes by storing their statistical information in an additional memory. Classifier Weights Scaling for Class Incremental Learning (ScaIL) [51] rectifies the weights of old classes to make them more comparable to those of new classes. Zhao et al. [36] proposed Weight Aligning (WA) to correct the biased weights at the output layer once the training process has ended.…”
Section: Related Work (mentioning)
confidence: 99%
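The weight-rectification idea shared by ScaIL and WA can be illustrated by rescaling the old-class rows of the final classifier so their average norm matches that of the new-class rows. This sketch follows the WA-style norm alignment rather than ScaIL's exact procedure, which instead reuses statistics of the classifiers learned when each past class was first trained; the function name is an assumption:

```python
import torch

@torch.no_grad()
def align_old_class_weights(fc_weight, n_old):
    """Rescale the first n_old rows of the final FC weight matrix so
    their mean norm matches the mean norm of the new-class rows."""
    norms = fc_weight.norm(dim=1)                 # one norm per class row
    gamma = norms[n_old:].mean() / norms[:n_old].mean()
    fc_weight[:n_old] *= gamma                    # boost/shrink old rows
    return fc_weight
```

Because it runs after training, this kind of correction adds no cost to the incremental update itself, which is part of its appeal over retraining-based fixes for class imbalance.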
“…Therefore, there is a need for IL with a reasonable balance between accuracy, memory consumption, and training efficiency. In IoT, little or no memory for historical data is preferred during continuous evolution [11].…”
Section: Introduction (mentioning)
confidence: 99%