2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01360

iTAML: An Incremental Task-Agnostic Meta-learning Approach

Abstract: Humans can continuously learn new knowledge as their experience grows. In contrast, previous learning in deep neural networks can quickly fade out when they are trained on a new task. In this paper, we hypothesize this problem can be avoided by learning a set of generalized parameters that are neither specific to old nor new tasks. In this pursuit, we introduce a novel meta-learning approach that seeks to maintain an equilibrium between all the encountered tasks. This is ensured by a new meta-update rule whic…
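The truncated abstract points to a meta-update rule that keeps the shared parameters balanced across all encountered tasks. As an illustration only (the exact iTAML rule is not reproduced here), the sketch below shows a Reptile-style, task-balanced meta-update in PyTorch: a copy of the shared weights is adapted to each task, and the outer step moves the shared weights toward the average of the per-task solutions. The names `task_loaders`, the learning rates, and the cross-entropy loss are assumed placeholders.

```python
# Illustrative sketch only, not the exact iTAML meta-update.
import torch

def meta_update(model, task_loaders, inner_steps=5, inner_lr=0.01, meta_lr=0.1):
    """Reptile-style outer step: adapt to each task from the same starting
    point, then move the shared weights toward the mean adapted weights,
    giving every seen task (old or new) equal influence."""
    criterion = torch.nn.CrossEntropyLoss()
    base = [p.detach().clone() for p in model.parameters()]
    avg_delta = [torch.zeros_like(p) for p in base]

    for loader in task_loaders:
        # reset to the shared starting point before adapting to this task
        with torch.no_grad():
            for p, b in zip(model.parameters(), base):
                p.copy_(b)
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        batches = iter(loader)  # assumes at least `inner_steps` batches per task
        for _ in range(inner_steps):
            x, y = next(batches)
            opt.zero_grad()
            criterion(model(x), y).backward()
            opt.step()
        with torch.no_grad():
            # accumulate this task's drift from the shared starting point
            for d, p, b in zip(avg_delta, model.parameters(), base):
                d += (p - b) / len(task_loaders)

    # outer (meta) step: equal weight per task keeps the parameters generic
    with torch.no_grad():
        for p, b, d in zip(model.parameters(), base, avg_delta):
            p.copy_(b + meta_lr * d)
    return model
```

Calling `meta_update(model, loaders_for_all_seen_tasks)` once per meta-iteration gives every task the same pull on the shared parameters, which is the equilibrium idea the abstract describes.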

Cited by 121 publications (105 citation statements). References 16 publications.
“…Compared with other few-shot detection algorithms, the advantage of ONCE is that after training on the basic dataset, the new small-sample dataset can be used directly for inference, and the contents of the basic dataset are not forgotten in this process. iTAML [118] is also an incremental learning algorithm based on meta-learning, but it focuses on classification tasks.…”
Section: Methods Description
Mentioning, confidence: 99%
“…This simple method, known as experience replay, has been explored and shown to be effective [5-8, 11, 27, 50]. In this work we aim to go one step further and investigate the role of explanations in continual learning, particularly in mitigating forgetting and change of model explanations.…”
Section: Remembering For the Right Reasons
Mentioning, confidence: 99%
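The experience-replay idea referenced in the citation above is simple to sketch. The following is a minimal example under stated assumptions (the buffer capacity, batch shapes, and the `train_step` helper are hypothetical, not taken from any of the cited works): a small reservoir memory stores past examples, and each update mixes the current batch with replayed ones to reduce forgetting.

```python
import random
import torch

class ReplayBuffer:
    """Tiny reservoir-style memory holding a few (x, y) examples from past tasks."""
    def __init__(self, capacity=200):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # reservoir sampling keeps an (approximately) uniform subset of all seen examples
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_step(model, opt, criterion, x_new, y_new, buffer, replay_bs=32):
    """One update that mixes the current task's batch with replayed old examples."""
    opt.zero_grad()
    loss = criterion(model(x_new), y_new)
    if buffer.data:  # rehearse stored samples alongside the new task
        x_old, y_old = buffer.sample(replay_bs)
        loss = loss + criterion(model(x_old), y_old)
    loss.backward()
    opt.step()
    for x, y in zip(x_new, y_new):  # store a few current samples for future rehearsal
        buffer.add(x.detach(), y.detach())
```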
“…An active line of research in continual learning explores the effectiveness of using small memory budgets to store data points from the training set [5-8], gradients [9], or an online generative model that can fake them later [10]. Memory has also been exploited in the form of accommodating space for architecture growth and storage to fully recover the old performance when needed.…”
Section: Introduction
Mentioning, confidence: 99%
“…In effect, the ability to generalize across tasks is at the core of meta-learning. In this context, [23], [24], [44] propose different meta-learning strategies to tackle a continual learning scenario. The resulting techniques reduce task interference by avoiding conflicts between current and future gradient directions to update weights.…”
Section: B. Meta-learning
Mentioning, confidence: 99%
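The last sentence of that citation describes reducing task interference by avoiding conflicting gradient directions. One common concrete instance of this idea (not necessarily the mechanism used in [23], [24], or [44]) is an A-GEM-style projection: if the current update opposes a reference gradient computed on stored past-task data, the conflicting component is removed.

```python
import torch

def project_if_conflicting(g_new: torch.Tensor, g_ref: torch.Tensor) -> torch.Tensor:
    """A-GEM-style check: g_new is the flattened gradient on the current task,
    g_ref the flattened gradient on past-task (memory) data. If they conflict
    (negative dot product), remove the conflicting component from g_new."""
    dot = torch.dot(g_new, g_ref)
    if dot < 0:  # the proposed update would increase the loss on past tasks
        g_new = g_new - (dot / torch.dot(g_ref, g_ref)) * g_ref
    return g_new

# Hypothetical usage: flatten per-parameter gradients into one vector, project,
# then write the projected values back before calling optimizer.step().
```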