Few-Shot Lifelong Learning (2021)
DOI: 10.1609/aaai.v35i3.16334

Abstract: Many real-world classification problems often have classes with very few labeled training samples. Moreover, all possible classes may not be initially available for training, and may be given incrementally. Deep learning models need to deal with this two-fold problem in order to perform well in real-life situations. In this paper, we propose a novel Few-Shot Lifelong Learning (FSLL) method that enables deep learning models to perform lifelong/continual learning on few-shot data. Our method selects very few par…

Cited by 51 publications (14 citation statements) | References 24 publications (51 reference statements)
“…Prototype modeling was also used to assign prototypes in the embedding space to reserve it for future incoming classes [62], or to use the average of new-class embedding representations as a class prototype in place of a learned classifier [63]. Other methods addressed the problem by synthesizing features into a mixture of sub-spaces for incremental classes using a VAE [7], or by adapting general deep learning architectures so that only a few parameters are updated for every new set of novel incoming classes [30]. More recent approaches combine features from supervised and self-supervised models to boost classifiers [1], or calibrate distributions to avoid forgetting by retrieving distributions for old classes while estimating distributions for new classes [26].…”
Section: Few-Shot Class-Incremental Learning
confidence: 99%
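The prototype-averaging idea cited above ([63]) can be made concrete with a short sketch. The snippet below is an illustrative, generic implementation rather than code from any of the cited papers: each class prototype is the mean embedding of that class's few labelled samples, and a query is assigned to the nearest prototype by cosine similarity.

```python
# Illustrative sketch (assumed, generic): prototype-based classification where
# each class prototype is the normalized mean embedding of its few-shot
# samples, and a query is assigned to the most similar prototype.
import torch
import torch.nn.functional as F

class PrototypeClassifier:
    def __init__(self):
        self.prototypes = {}  # class_id -> normalized mean embedding

    def add_class(self, class_id, embeddings):
        # embeddings: (n_shot, dim) features from a frozen backbone
        proto = F.normalize(embeddings.mean(dim=0), dim=0)
        self.prototypes[class_id] = proto

    def predict(self, query_embedding):
        # query_embedding: (dim,) feature of a single test sample
        q = F.normalize(query_embedding, dim=0)
        scores = {c: torch.dot(q, p).item() for c, p in self.prototypes.items()}
        return max(scores, key=scores.get)
```

Because new classes only add prototypes and never overwrite existing ones, this style of classifier avoids interference at the classifier level, which is one reason it recurs across FSCIL methods.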
“…We also introduce prompt regularization to improve performance and prevent forgetting. Our experimental results demonstrate that CPE-CLIP significantly improves FSCIL performance compared to state-of-the-art proposals while drastically reducing the number of learnable parameters and training costs. Recent research has focused on solving these problems through various approaches, such as meta-learning [57, 34], regularization techniques [30], or knowledge distillation [38, 6, 62]. These methods have shown promising results in achieving incremental learning over time with a limited amount of data available.…”
confidence: 99%
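Prompt regularization, as mentioned in the statement above, can be sketched generically. The snippet below is only an assumption about one plausible form (an L2 penalty pulling the current learnable prompts toward their values from the previous session); the names `prompt` and `prompt_prev` are illustrative and not CPE-CLIP's actual API.

```python
# Hedged sketch: a generic L2 prompt-regularization term that keeps the
# learnable prompt parameters close to their previous-session values.
import torch

def prompt_regularization(prompt: torch.Tensor,
                          prompt_prev: torch.Tensor,
                          lam: float = 0.1) -> torch.Tensor:
    # Squared L2 distance to the (detached) previous prompts, scaled by lam.
    return lam * (prompt - prompt_prev.detach()).pow(2).sum()

# Typical usage during an incremental session (illustrative):
# total_loss = task_loss + prompt_regularization(prompt, prompt_prev)
```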
“…Recently, OCL for embedded devices [4], [8], [23], [24] further restricts the scope to parameter-efficient updates with unitary batch size due to low-resource devices. In parallel, few-shot CL (FS-CL) [25], [26], [27], [28], [29] and FS-OCL [10] train a model to recognize new classes from only a few labelled samples, via fast and efficient model updates with minimal human effort. Across all these definitions, three main scenarios have been considered [30]: 1) class-incremental (CI), where new classes are introduced to models over time; 2) domain-incremental (DI), where the same problem is learned in different contexts (e.g., with a fixed set of classes); and 3) task-incremental (TI), where new distinct tasks are presented to models.…”
Section: Introduction
confidence: 99%
“…It can also arrest the catastrophic forgetting of the previously learnt classes to a certain extent. As is common in most FSCIL approaches [44, 25], we propose to freeze the feature extractor and learn only the SC at each incremental step. In order to compute features from the base classes that generalize to unseen classes, inspired by recent works [22, 47], we use self-supervision along with the SC, giving our final S3C framework.…”
Section: Introduction
confidence: 99%
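Freezing the feature extractor and updating only the classifier at each incremental step, as described in the statement above, is simple to express in code. The sketch below assumes a generic PyTorch backbone/classifier pair; it is not the S3C implementation, and the stochastic-classifier details are omitted.

```python
# Minimal sketch (assumed setup): freeze the backbone after base training and
# train only the classifier head during a few-shot incremental session.
import torch
import torch.nn as nn

def incremental_session(backbone: nn.Module, classifier: nn.Module,
                        loader, epochs: int = 10, lr: float = 1e-3):
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad = False              # feature extractor stays fixed
    optimizer = torch.optim.SGD(classifier.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = backbone(images)     # frozen features
            logits = classifier(feats)
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

Only the classifier's parameters receive gradients here, so the per-session update cost stays small and the shared representation for old classes is left untouched.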