2021
DOI: 10.48550/arxiv.2104.03047
Preprint

Few-Shot Incremental Learning with Continually Evolved Classifiers

Abstract: Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points, without forgetting knowledge of old classes. The difficulty lies in that limited data from new classes not only lead to significant overfitting issues but also exacerbate the notorious catastrophic forgetting problems. Moreover, as training data come in sequence in FSCIL, the learned classifier can only provide discriminative information in individual sessions, …
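To make the setting concrete, here is a small, self-contained sketch of the FSCIL protocol the abstract describes: classes arrive in sessions, incremental sessions provide only a few labelled samples per class, and evaluation always covers every class seen so far. The nearest-prototype classifier, the synthetic features, and the session sizes are illustrative assumptions, not the method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_session(class_ids, shots, dim=16):
    # Synthetic features: each class is centred at its own id (pure illustration).
    return {c: rng.normal(loc=c, scale=1.0, size=(shots, dim)) for c in class_ids}

prototypes = {}  # class id -> mean feature vector, grown session by session

def learn_session(session):
    for c, feats in session.items():
        prototypes[c] = feats.mean(axis=0)   # few shots => noisy prototype

def classify(x):
    # Predict by nearest prototype over *all* classes seen so far (old + new).
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

learn_session(make_session(range(0, 6), shots=100))       # base session: many shots
for start in range(6, 12, 2):                             # incremental 2-way 5-shot sessions
    learn_session(make_session(range(start, start + 2), shots=5))
    print(len(prototypes), "classes seen; sample prediction:",
          classify(rng.normal(loc=7.0, scale=1.0, size=16)))
```

With only a handful of shots per new class, the new prototypes are noisy and accuracy on old classes can also degrade, which is exactly the overfitting-plus-forgetting tension the abstract points out.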

Cited by 2 publications (3 citation statements)
References 36 publications (40 reference statements)
“…These models do not consider the Few-shot learning setting. • Few-shot Class-incremental Learning: CEC [47]. It is one of the state-of-the-art methods but is primarily for the image domain, so we replace the encoder with a GNN encoder.…”
Section: Experiments Settings (mentioning)
confidence: 99%
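The statement above describes keeping CEC's classifier side while replacing the image backbone with a GNN encoder. Below is a minimal, hedged sketch of that kind of swap: a single round of normalized-adjacency message passing producing node embeddings for a linear classifier. The layer sizes, the normalized-adjacency assumption, and the module names are illustrative, not the citing paper's actual architecture.

```python
import torch
import torch.nn as nn

class SimpleGraphEncoder(nn.Module):
    """One round of message passing over a normalized adjacency, then a projection."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj):
        # adj is assumed to be a row-normalized adjacency matrix with self-loops.
        h = torch.relu(adj @ self.lin1(x))    # aggregate neighbour features
        return adj @ self.lin2(h)             # node embeddings for the classifier

# Node features and an adjacency matrix replace the image batches an image encoder expects.
x = torch.randn(5, 8)                         # 5 nodes, 8 input features each
adj = torch.eye(5)                            # trivial graph: self-loops only
encoder = SimpleGraphEncoder(8, 16, 32)
classifier = nn.Linear(32, 10)                # e.g. base + incremental classes seen so far
logits = classifier(encoder(x, adj))
print(logits.shape)                           # torch.Size([5, 10])
```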
“…This may engender two potential problems: (1) on the one hand, if trained on all the data samples naively, the learned graph model could be substantially biased towards those base classes with significantly more nodes, resulting in the inertia to learn new node classes [37]. Moreover, to retain the existing knowledge, many of the methods from few-shot incremental learning [31,46,47] use a fixed feature encoder, which will not be updated after being pre-trained on base classes. Such a design will also exacerbate the difficulty of adapting the model to the new incremental learning tasks; (2) on the other hand, as the node classes from new tasks only have few-labeled samples, imposing the graph learning model to focus on new tasks will easily lead to overfitting to those new tasks and erase the existing knowledge for previously learned classes, which is known as Catastrophic Forgetting [8,14,18].…”
Section: Introduction (mentioning)
confidence: 99%
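The passage above criticises the common design in which the feature encoder is frozen after base-class pre-training and only the classifier is adapted in later few-shot sessions. A minimal sketch of that design, assuming a toy MLP backbone, an expanding linear classifier, and an arbitrary optimizer (all shapes are illustrative):

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
classifier = nn.Linear(32, 60)                # e.g. 60 base classes

# ... pre-train encoder + classifier on the base session here ...

# Freeze the backbone: its parameters receive no gradient updates afterwards.
for p in encoder.parameters():
    p.requires_grad_(False)
encoder.eval()

# Grow the classifier for a new 5-way session and train only its weights.
old_weight, old_bias = classifier.weight.data, classifier.bias.data
classifier = nn.Linear(32, 65)                # 60 old + 5 new classes
with torch.no_grad():
    classifier.weight[:60] = old_weight       # keep the old class weights
    classifier.bias[:60] = old_bias

optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01)
x, y = torch.randn(25, 64), torch.randint(60, 65, (25,))   # a 5-way 5-shot batch
loss = nn.functional.cross_entropy(classifier(encoder(x)), y)
loss.backward()                               # gradients stop at the frozen encoder
optimizer.step()
```

Because the backbone never sees the new classes' data, its features may suit them poorly, which is the adaptation difficulty the quoted passage highlights.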
“…However, the trade-off between the plasticity of learning and the stability of memory inevitably faces the challenge of catastrophic forgetting. To address this problem, there are several typical approaches to alleviate it, which can be roughly categorized into four major types: (1) model structure-based approaches (CEC [1]); (2) playback-based approaches (LwF [2], iCarl [3]); (3) regularization-based approaches (podNet [4]); (4) unsupervised category incremental methods (LUMP [5]). Although these methods have been very successful, (1)(2)(3) are all based on supervised learning research.…”
Section: Introduction (mentioning)
confidence: 99%
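Several of the approaches listed above (e.g. LwF [2]) counter forgetting with a knowledge-distillation term that keeps the updated model's predictions on old classes close to those of the previous model, alongside the usual cross-entropy on the new task. A hedged sketch of such a loss, with temperature and weighting chosen arbitrarily for illustration rather than taken from any of the cited papers:

```python
import torch
import torch.nn.functional as F

def incremental_loss(new_logits, old_logits, targets, temperature=2.0, alpha=0.5):
    """Cross-entropy on new labels plus distillation towards the previous model."""
    ce = F.cross_entropy(new_logits, targets)
    # Soften both distributions and match them on the old classes with a KL term.
    kd = F.kl_div(
        F.log_softmax(new_logits[:, : old_logits.size(1)] / temperature, dim=1),
        F.softmax(old_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return ce + alpha * kd

# Example shapes: 8 samples, 10 old classes grown to 12 after a new session.
new_logits = torch.randn(8, 12, requires_grad=True)
old_logits = torch.randn(8, 10)               # produced by the frozen previous model
targets = torch.randint(0, 12, (8,))
print(incremental_loss(new_logits, old_logits, targets))
```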