2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00730
Energy-based Latent Aligner for Incremental Learning

Cited by 17 publications (6 citation statements). References 21 publications.
“…Regularization-based methods [20]-[24] strive to identify the parameters important to the original tasks and constrain their changes in subsequent tasks. Elastic Weight Consolidation (EWC) [20] computes the importance of each parameter via a diagonal approximation of the Fisher Information Matrix and selectively slows down the learning of each parameter.…”
Section: Parametric Methods
confidence: 99%
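A minimal sketch of the EWC-style penalty described above, assuming a PyTorch model; the helper names `estimate_fisher_diag`, `ewc_penalty`, and the `lam` weight are illustrative, not taken from the cited papers:

```python
import torch


def estimate_fisher_diag(model, data_loader, loss_fn):
    """Diagonal Fisher approximation: average squared gradient of the loss
    (a stand-in for the log-likelihood) over the old task's data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(data_loader), 1) for n, f in fisher.items()}


def ewc_penalty(model, old_params, fisher_diag, lam=1.0):
    """Quadratic penalty that selectively slows learning of parameters with
    high Fisher importance, pulling them toward their previous-task values."""
    loss = 0.0
    for n, p in model.named_parameters():
        if n in fisher_diag:
            loss = loss + (fisher_diag[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * loss
```

In use, the penalty is simply added to the new task's loss, so parameters the old task relied on move more slowly than unimportant ones.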
“…Riemannian Walk (RWalk) [23] calculates parameter importance by fusing a Fisher Information Matrix approximation with an online path integral, from a theoretically grounded KL-divergence-based perspective. ELI [24] learns an energy manifold for the latent representations to counter the representational shift during incremental learning. The advantage of regularization-based methods is that they do not need to store samples of old tasks, i.e., exemplar samples.…”
Section: Parametric Methods
confidence: 99%
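A rough conceptual sketch of the energy-aligner idea summarized above, not the authors' implementation: the `EnergyNet` module, the `align_latents` procedure, and the step counts are illustrative assumptions about how a learned energy over latents could be used to counter representational shift.

```python
import torch
import torch.nn as nn


class EnergyNet(nn.Module):
    """Small MLP assigning a scalar energy to a latent feature vector;
    low energy is intended to mean 'compatible with the old task's latents'."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, z):
        return self.net(z)


def align_latents(energy_net, z_new, steps=5, step_size=0.1):
    """Nudge new-model latents down the learned energy surface so they better
    match the old representation (illustrative gradient-descent alignment)."""
    z = z_new.clone().detach().requires_grad_(True)
    for _ in range(steps):
        energy = energy_net(z).sum()
        grad, = torch.autograd.grad(energy, z)
        z = (z - step_size * grad).detach().requires_grad_(True)
    return z.detach()
```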
“…There are three main lines of work to address the stability-plasticity trade-off in CIL. Distillation-based methods [8,15,16,21,27,37,41] introduce different knowledge distillation (KD) losses to consolidate previous knowledge when training the model on new data. The key idea is to enforce model prediction logits [21,30], feature maps [8,14], or topologies in the feature space [36] to be close to those of the pre-phase model.…”
Section: Related Work
confidence: 99%
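A minimal sketch of a logit/feature distillation loss of the kind described above; the temperature value, `feat_weight`, and function name are illustrative assumptions rather than any specific cited method.

```python
import torch.nn.functional as F


def distillation_loss(new_logits, old_logits, new_feat=None, old_feat=None,
                      temperature=2.0, feat_weight=1.0):
    """Keep the current model's prediction logits (and optionally feature
    maps) close to those of the frozen pre-phase model."""
    t = temperature
    kd = F.kl_div(F.log_softmax(new_logits / t, dim=1),
                  F.softmax(old_logits / t, dim=1),
                  reduction="batchmean") * (t * t)
    if new_feat is not None and old_feat is not None:
        kd = kd + feat_weight * F.mse_loss(new_feat, old_feat)
    return kd
```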
“…The pioneering work is LwF [6], which proposes an incremental learning method without catastrophic forgetting and conducts extensive experiments. In follow-up studies, researchers have explored various incremental learning strategies, including replay-based methods [35], [36], [37], architecture-based methods [38], [39], [40], and regularization-based methods [41], [42], [43]. Replay-based incremental learning retains exemplars of the old classes for use when learning new tasks.…”
Section: B. Incremental Learning
confidence: 99%
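A toy sketch of the exemplar retention idea behind replay-based methods mentioned above; the `ExemplarMemory` class and the `per_class` budget are illustrative assumptions, not a specific cited algorithm.

```python
import random
from collections import defaultdict


class ExemplarMemory:
    """Tiny exemplar buffer: keep up to `per_class` samples of each old class
    and mix them into batches when training on a new task."""
    def __init__(self, per_class=20):
        self.per_class = per_class
        self.store = defaultdict(list)

    def add(self, samples, label):
        self.store[label].extend(samples)
        self.store[label] = self.store[label][: self.per_class]

    def sample(self, k):
        pool = [(x, y) for y, xs in self.store.items() for x in xs]
        return random.sample(pool, min(k, len(pool)))
```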