2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00257
Adaptive Aggregation Networks for Class-Incremental Learning

Abstract: Class-Incremental Learning (CIL) aims to learn a classification model with the number of classes increasing phase by phase. The inherent problem in CIL is the stability-plasticity dilemma between the learning of old and new classes: high-plasticity models easily forget old classes, while high-stability models are weak at learning new classes. We alleviate this issue by proposing a novel network architecture called Meta-Aggregating Networks (MANets), in which we explicitly build two residual blocks at each residual level…
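The abstract only hints at the architecture, so here is a minimal PyTorch sketch of the idea as described: at each residual level, a frozen (stable) block and a trainable (plastic) block run in parallel and their outputs are aggregated with a learnable weight. The class and parameter names are hypothetical, and the scalar mixing weight stands in for the paper's meta-learned aggregation, so treat this as a sketch under those assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn


class AggregationBlock(nn.Module):
    """One AANets-style residual level (hypothetical sketch): a frozen
    stable branch preserves old-class knowledge, a trainable plastic
    branch adapts to new classes, and a learnable scalar mixes them."""

    def __init__(self, stable_block: nn.Module, plastic_block: nn.Module):
        super().__init__()
        self.stable = stable_block
        for p in self.stable.parameters():  # freeze -> stability
            p.requires_grad = False
        self.plastic = plastic_block        # stays trainable -> plasticity
        # Scalar aggregation weight; the paper meta-learns such weights,
        # here it is a plain parameter for brevity. sigmoid(0) = 0.5,
        # so the two branches start with an even mix.
        self.alpha = nn.Parameter(torch.tensor(0.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = torch.sigmoid(self.alpha)  # keep the mix inside (0, 1)
        return a * self.plastic(x) + (1.0 - a) * self.stable(x)
```

Usage is straightforward: wrap two copies of the same residual block, e.g. `AggregationBlock(nn.Conv2d(64, 64, 3, padding=1), nn.Conv2d(64, 64, 3, padding=1))`, and stack one such module per residual level.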

Cited by 138 publications (132 citation statements); references 50 publications.
“…Baselines: We evaluate representative memory replay approaches such as LwF (Li & Hoiem, 2017), iCaRL (Rebuffi et al., 2017), BiC (Wu et al., 2019), LUCIR (Hou et al., 2019), Mnemonics (Liu et al., 2020b), TPCIL (Tao et al., 2020a), PODNet (Douillard et al., 2020), DDE (Hu et al., 2021), and AANets (Liu et al., 2021a). In particular, AANets and DDE are recent strong approaches implemented on the backbones of LUCIR and PODNet, so we also implement ours on these two backbones.…”
Section: Methods (mentioning confidence: 99%)
“…Replay-based methods (Rebuffi et al., 2017; Shin et al., 2017) approximated and recovered the old data distribution. In particular, replaying representative old training samples (referred to as memory replay) can generally achieve the best performance in class-incremental learning (Liu et al., 2021a; Hu et al., 2021) and in numerous other continual learning scenarios, such as audio tasks (Ehret et al., 2020), few-shot (Tao et al., 2020b), semi-supervised (Wang et al., 2021a), and unsupervised continual learning (Khare et al., 2021).…”
Section: Related Work (mentioning confidence: 99%)
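Several of the citing passages revolve around memory replay, so a brief sketch may help. The buffer below fills its memory by reservoir sampling over the example stream; this is a generic illustration under that assumption, not the herding-based exemplar selection that methods such as iCaRL and LUCIR actually use.

```python
import random


class ReplayBuffer:
    """Generic fixed-size episodic memory for replay (illustration only;
    the cited methods select exemplars with herding, not at random)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.memory = []  # stored (input, label) pairs
        self.seen = 0     # total examples observed in the stream

    def add(self, example) -> None:
        # Reservoir sampling: keeps a uniform random subset of the stream,
        # so old phases stay represented as new classes arrive.
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, batch_size: int):
        # Draw a replay batch to interleave with new-class training data.
        return random.sample(self.memory, min(batch_size, len(self.memory)))
```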
“…Class-Incremental Learning (CIL). Incremental learning aims to learn continuously by accumulating past knowledge [2,19,24]. Our work is conducted on CIL benchmarks, which require learning a unified classifier that can recognize all the old and new classes combined.…”
Section: Related Work (mentioning confidence: 99%)