2021
DOI: 10.48550/arxiv.2106.08085
Preprint

Natural continual learning: success is a journey, not (just) a destination

Abstract: Biological agents are known to learn many different tasks over the course of their lives, and to be able to revisit previous tasks and behaviors with little to no loss in performance. In contrast, artificial agents are prone to 'catastrophic forgetting', whereby performance on previous tasks deteriorates rapidly as new ones are acquired. This shortcoming has recently been addressed using methods that encourage parameters to stay close to those used for previous tasks. This can be done by (i) using specific para…

Cited by 3 publications (3 citation statements)
References 29 publications
“…This information is often explicitly given in the form of task labels (Rusu et al 2016, Aljundi et al 2017, Li & Hoiem 2017, Masse et al 2018, Zeng et al 2019), or sometimes it is more implicit, for example, when only the boundaries between tasks are signposted but the identities of individual tasks are not provided (Kirkpatrick et al 2017, Lopez-Paz & Ranzato 2017, Rebuffi et al 2017, Shin et al 2017, Zenke et al 2017). Furthermore, to ensure that each task is learned equally well, several approaches also require networks to be provided with an equal amount of training for each task (Lopez-Paz & Ranzato 2017, Rebuffi et al 2017, Shin et al 2017, Kao et al 2021a). Arguably, this is an unreasonable requirement for a lifelong learning algorithm, as, in general, an equal amount of experience per task cannot be guaranteed in the wild and should not be expected.…”
Section: Continual Learning
confidence: 99%
“…Particular methods such as SI [10] and EWC [3] differ in their regularization functions. In several works, such as NCL [11], the authors combine weight regularization with other techniques, such as gradient projection [12].…”
Section: Related Work
confidence: 99%
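The weight-regularization idea this citation statement refers to can be sketched as a quadratic penalty that anchors parameters to their values after a previous task, weighted by a per-parameter importance estimate (a diagonal Fisher estimate in EWC; a path-integral measure in SI). The minimal sketch below is illustrative only; all names and values are assumptions, not taken from any of the cited implementations.

```python
import numpy as np

def quadratic_anchor_penalty(params, prev_params, importance, lam=1.0):
    """EWC/SI-style penalty keeping params near those learned on a prior task.

    Each parameter is pulled toward its previous-task value with a strength
    proportional to its estimated importance for that task. `importance`
    stands in for, e.g., a diagonal Fisher information estimate.
    """
    return 0.5 * lam * np.sum(importance * (params - prev_params) ** 2)

# Toy usage: parameters that drift from their anchors incur a penalty
# scaled by how important each parameter was for the previous task.
theta_prev = np.array([1.0, -2.0, 0.5])   # parameters after task A
theta_new  = np.array([1.5, -2.0, 0.0])   # candidate parameters during task B
importance = np.array([2.0, 1.0, 0.1])    # per-parameter importance estimate

penalty = quadratic_anchor_penalty(theta_new, theta_prev, importance)
```

In training, this penalty would be added to the new task's loss, so gradient descent trades off new-task performance against drift in parameters that mattered for old tasks.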
“…Motion is a sequential data type consisting of individual poses, so a sequential model is required. So far, only a few continual learning studies have tested or developed models for tasks involving sequential or time-series data such as text or video, e.g., [13,14,15,16,17,18,19].…”
Section: Introduction
confidence: 99%