2019
DOI: 10.48550/arxiv.1910.14481
Preprint

Continual Unsupervised Representation Learning

Dushyant Rao,
Francesco Visin,
Andrei A. Rusu
et al.

Abstract: Continual learning aims to improve the ability of modern learning systems to deal with non-stationary distributions, typically by attempting to learn a series of tasks sequentially. Prior art in the field has largely considered supervised or reinforcement learning tasks, and often assumes full knowledge of task labels and boundaries. In this work, we propose an approach (CURL) to tackle a more general problem that we will refer to as unsupervised continual learning. The focus is on learning representations wit…

Cited by 10 publications (16 citation statements)
References 25 publications (41 reference statements)
“…We use a Bit-Swap model with 8 hierarchical latent variables and a Markov chain structure for each level of codes. In our preliminary experiments, we find that the ResNet-32 used in previous replay-based methods [26,36,39] causes underfitting of our DRR in some cases. We instead use a ResNet-18 [13], as in [25], as the classifier (see Appendix for further discussion).…”
Section: Methods
confidence: 89%
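As a rough illustration of the classifier swap described in this statement, the PyTorch sketch below instantiates a torchvision ResNet-18 and resizes its output head to the classes seen so far. The `num_classes` value and training-from-scratch choice are assumptions for illustration, not details from the citing paper.

```python
import torch.nn as nn
from torchvision.models import resnet18

def build_classifier(num_classes: int) -> nn.Module:
    """ResNet-18 backbone with a head sized to the seen classes
    (a sketch; the citing paper's exact configuration may differ)."""
    model = resnet18(weights=None)  # train from scratch, common in class-incremental setups
    # Replace the 1000-way ImageNet head with one sized to the current class count.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Hypothetical usage: 10 classes observed so far in the incremental stream.
clf = build_classifier(num_classes=10)
```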
“…For DRR and IB-DRR* we only save codes for seen classes. For IB-DRR, we follow [26,36,39] and additionally save 20 raw exemplars per class for seen classes.…”
Section: Methods
confidence: 99%
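The fixed per-class exemplar budget mentioned here is a standard rehearsal pattern. The following minimal sketch keeps at most 20 raw exemplars per class in a dict-backed buffer; the reservoir-sampling selection policy is an assumption for illustration (iCaRL-style methods typically use herding instead).

```python
import random
from collections import defaultdict

class ExemplarBuffer:
    """Minimal rehearsal memory keeping at most `per_class` raw exemplars
    per seen class (a sketch; selection policy is an assumption)."""

    def __init__(self, per_class: int = 20):
        self.per_class = per_class
        self.store = defaultdict(list)   # class label -> stored raw inputs
        self.seen = defaultdict(int)     # class label -> items observed so far

    def add(self, x, y):
        self.seen[y] += 1
        bucket = self.store[y]
        if len(bucket) < self.per_class:
            bucket.append(x)
        else:
            # Reservoir sampling: every observed item of class y is retained
            # with equal probability under the fixed budget.
            i = random.randrange(self.seen[y])
            if i < self.per_class:
                bucket[i] = x

    def sample(self, k):
        """Draw up to k stored (input, label) pairs for replay."""
        flat = [(x, y) for y, xs in self.store.items() for x in xs]
        return random.sample(flat, min(k, len(flat)))
```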
“…Lifelong (or continual) learning describes the scenario in which new tasks arrive sequentially and must be incorporated into the current model while retaining previous knowledge (Parisi et al, 2019). Approaches to lifelong learning mainly aim to mitigate catastrophic forgetting (Rao et al, 2019; Ramapuram et al, 2020; Ye and Bors, 2020). According to Parisi et al (2019), there are three main approaches to lifelong learning: (i) retraining with regularisation; (ii) network expansion; (iii) selective network retraining and expansion.…”
Section: Related Work
confidence: 99%
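Of the three approaches listed in this statement, (i) retraining with regularisation is perhaps the easiest to make concrete: a penalty term discourages parameters from drifting away from values learned on earlier tasks. The sketch below adds an EWC-style quadratic penalty; the penalty weight `lam` and the precomputed per-parameter importance weights are illustrative assumptions, not a method from the cited works.

```python
import torch

def regularised_loss(task_loss: torch.Tensor,
                     params: dict,
                     old_params: dict,
                     importance: dict,
                     lam: float = 100.0) -> torch.Tensor:
    """Approach (i), retraining with regularisation: an EWC-style quadratic
    penalty keeps parameters near values learned on earlier tasks.
    `importance` (e.g. diagonal Fisher estimates) and `lam` are assumptions."""
    penalty = torch.zeros(())
    for name, p in params.items():
        penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return task_loss + lam * penalty
```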
“…Although large-scale datasets can cover many scenarios, a supervised model still cannot fit data from new classes well when they appear. Some works try to improve supervised CIL using unsupervised methods [7,25]. However, they still rely on supervised labels, and the gap between them and iCaRL is small.…”
Section: Self-supervised Learning
confidence: 99%