2021
DOI: 10.48550/arxiv.2101.07295
Preprint
The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction

Abstract: Continual learning is known to suffer from catastrophic forgetting, a phenomenon in which earlier learned concepts are forgotten in favor of more recently seen samples. In this work, we challenge the assumption that continual learning is inevitably associated with catastrophic forgetting by presenting a set of tasks that, surprisingly, do not suffer from catastrophic forgetting when learned continually. We attempt to provide insight into the properties of these tasks that make them robust to catastrophic forget…
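
As a concrete illustration of the phenomenon the abstract refers to, the sketch below measures forgetting in the standard way: train on one task, record performance, train on a second task, then re-test the first. The toy tasks, model, and hyperparameters here are illustrative assumptions, not the paper's 3D shape-reconstruction setup.

import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(offset):
    # Hypothetical toy task: label points by which side of a shifted
    # threshold their coordinate sum falls on; each offset is a new task.
    x = torch.randn(512, 8)
    y = (x.sum(dim=1) > offset).long()
    return x, y

def train(model, opt, x, y, steps=200):
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
task_a, task_b = make_task(0.0), make_task(2.0)

train(model, opt, *task_a)
acc_a_before = accuracy(model, *task_a)  # accuracy right after learning A
train(model, opt, *task_b)               # continue on task B, no rehearsal
acc_a_after = accuracy(model, *task_a)   # re-test task A

# Positive values indicate task A was forgotten while learning task B.
print(f"forgetting on task A: {acc_a_before - acc_a_after:.3f}")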

Cited by 1 publication (1 citation statement)
References 34 publications (88 reference statements)
“…The spectrum of approaches between the aforementioned two extremes is best captured by the so-called "stability-plasticity" dilemma [29]. The decision to use autoencoders as part of the proposed model is dictated by firstly, its unsupervised nature, and secondly it has been shown that autoencoders are comparatively resilient to catastrophic forgetting [45]. For CL, since the bottleneck layer is the most sensitive part of our model for a given task, we create a separate latent layer for each class and the aforementioned process is repeated.…”
Section: Introduction
Confidence: 99%
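
A rough sketch of the per-class latent-layer idea the citing statement describes: a shared encoder and decoder with a separate bottleneck module grown for each class, so earlier classes' latent layers are left untouched as new ones arrive. The module names, layer sizes, and usage below are assumptions for illustration; the citing paper's exact architecture may differ.

import torch
import torch.nn as nn

class PerClassLatentAE(nn.Module):
    def __init__(self, in_dim=784, hidden=256, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, in_dim))
        # One bottleneck per class, grown as new classes arrive.
        self.bottlenecks = nn.ModuleDict()
        self.hidden, self.latent = hidden, latent

    def add_class(self, cls):
        # Create a fresh, class-specific latent layer, leaving the
        # bottlenecks of previously seen classes untouched.
        self.bottlenecks[str(cls)] = nn.Sequential(
            nn.Linear(self.hidden, self.latent), nn.ReLU(),
            nn.Linear(self.latent, self.hidden), nn.ReLU(),
        )

    def forward(self, x, cls):
        h = self.encoder(x)
        z = self.bottlenecks[str(cls)](h)  # route through the class's latent
        return self.decoder(z)

# Usage sketch: grow one bottleneck per class, then reconstruct through
# the latent layer belonging to the requested class.
model = PerClassLatentAE()
for cls in (0, 1):
    model.add_class(cls)
x = torch.randn(4, 784)
recon = model(x, cls=1)
print(recon.shape)  # torch.Size([4, 784])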