Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning
2020 · Preprint · DOI: 10.48550/arxiv.2003.05856

Cited by 10 publications (13 citation statements) · References 0 publications

“…The agents can even extrapolate far beyond the training distribution to recall words after subsequent learning episodes without ever being trained to do so. Thus better memory systems can help with the challenge of bridging from meta-learning to continual learning, which is receiving increasing interest [12,16,43,4,36].…”
Section: Discussion
confidence: 99%
“…In contrast, our method does not even attempt to find the task boundaries, but directly adapts without them. A number of related works also address continual learning via meta-learning, but with the aim of minimizing catastrophic forgetting (Gupta et al., 2020; Caccia et al., 2020). Our aim is not to address catastrophic forgetting.…”
Section: Related Work
confidence: 99%
“…Continual-meta learning focuses on fast learning and remembering [25,35,33,41], often emphasising online performance on OOD tasks [14]. As argued by Jerfel et al. [41], modularity can be useful in this setting to minimize interference between tasks.…”
Section: Related Work
confidence: 99%
“…The goal of this setting is to construct a learner that quickly (i.e. within a few steps of gradient descent) learns tasks from new environments and relearns (or remembers) tasks from previously learned environments [41,14,34]. Methods applicable in this setting usually rely on gradient-based meta-learning strategies based on MAML [24].…”
Section: F Continual Meta-learning
confidence: 99%
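
The last excerpt describes the standard MAML recipe [24]: adapt a model to a task with a few inner gradient steps, then update the shared initialization so that this fast adaptation works well across tasks. Below is a minimal, self-contained sketch of that pattern on the classic sine-regression toy problem, in PyTorch. The network size, learning rates, and helper names (net, adapt, sample_task) are illustrative assumptions, not taken from the cited papers.

```python
# Minimal MAML-style gradient-based meta-learning on sine regression.
# Illustrative sketch only; names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def net(params, x):
    # Tiny functional MLP: params = [W1, b1, W2, b2].
    h = torch.relu(x @ params[0] + params[1])
    return h @ params[2] + params[3]

def adapt(params, x, y, inner_lr=0.01, steps=1):
    # Inner loop: a few gradient steps on one task's support set.
    # create_graph=True lets the outer loss differentiate through these steps.
    for _ in range(steps):
        loss = F.mse_loss(net(params, x), y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]
    return params

def sample_task():
    # A task is a sine wave with random amplitude and phase;
    # draw() samples (x, y) pairs from that same task.
    amp = torch.rand(1) * 4.0 + 0.1
    phase = torch.rand(1) * 3.1416
    def draw(n=10):
        x = torch.rand(n, 1) * 10 - 5
        return x, amp * torch.sin(x + phase)
    return draw

params = [(torch.randn(1, 40) * 0.1).requires_grad_(),
          torch.zeros(40, requires_grad=True),
          (torch.randn(40, 1) * 0.1).requires_grad_(),
          torch.zeros(1, requires_grad=True)]
opt = torch.optim.Adam(params, lr=1e-3)

for it in range(1000):
    opt.zero_grad()
    meta_loss = torch.zeros(())
    for _ in range(4):               # meta-batch of tasks
        draw = sample_task()
        xs, ys = draw()              # support set: adapt on these
        xq, yq = draw()              # query set: evaluate the adapted params
        fast = adapt(params, xs, ys)
        meta_loss = meta_loss + F.mse_loss(net(fast, xq), yq)
    meta_loss.backward()             # second-order gradients through adapt()
    opt.step()
```

The create_graph=True flag is what makes the outer update differentiate through the inner adaptation steps; dropping it reduces the procedure to a first-order approximation, which is the distinction between full MAML and simple joint fine-tuning.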