Generative Models from the perspective of Continual Learning
Preprint, 2018
DOI: 10.48550/arxiv.1812.09111

Cited by 7 publications (15 citation statements). References 0 publications.
“…We do not compare model-based methods, because data-based methods are known to outperform them in class-incremental learning [22,41], and they are orthogonal to data-based methods, such that they can potentially be combined with our approaches for better performance [15]. Datasets.…”
Section: Methods
confidence: 99%
“…Therefore, the amount of knowledge kept by knowledge distillation depends on the degree of similarity between the data distribution used to learn the previous tasks in the previous stages and the one used to distill the knowledge in the later stages. To guarantee a certain amount of similar data, some prior works [3,30,33] reserved a small amount of memory to keep a coreset, and others [22,32,38,41,42] trained a generative model and replayed the generated data when training a new model. Note that the model-based and data-based approaches are orthogonal in most cases, thus they can be combined for better performance [15].…”
Section: Related Work
confidence: 99%
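The distillation idea in the excerpt above can be sketched as a loss term: the previous model's softened outputs become targets for the new model on whatever data is replayed (a coreset or generated samples). This is a minimal, self-contained sketch; the function names and the temperature value are illustrative, not taken from the cited papers:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(old_logits, new_logits, temperature=2.0):
    """Cross-entropy between the previous model's softened outputs
    (the targets) and the new model's softened outputs. Minimising it
    keeps the new model's predictions close to the old model's on the
    replayed data, which is how distillation retains old knowledge."""
    targets = softmax(old_logits, temperature)
    preds = softmax(new_logits, temperature)
    return -sum(t * math.log(p) for t, p in zip(targets, preds))

# Identical outputs give the minimum (the targets' entropy);
# diverging outputs are penalised more heavily.
same = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
diff = distillation_loss([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0])
print(diff > same)  # True
```

As the excerpt notes, this term only preserves knowledge to the extent that the replayed data resembles the distribution the old model was trained on.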
“…Other approaches using distillation loss include the encoder-based method (Rannen et al 2017) and incremental moment matching (Lee et al 2017). Some of the recent papers also consider generating data from previous tasks using a deep generative model (Lesort et al 2018; Shin et al 2017; van de Ven and Tolias 2018). Then, the model for the main task can be updated in a multi-task learning fashion using both the generated data and the data of the new task.…”
Section: Related Work
confidence: 99%
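The training scheme this excerpt describes can be sketched as a loop over tasks: before each new task, the generative model produces samples standing in for old data, and the mixed dataset is used to retrain both the main model and the generator. This is a toy illustration only; the per-class-mean "generator" below is a hypothetical stand-in for a deep generative model such as a VAE, and none of the names come from the cited papers:

```python
import random

class ToyReplayGenerator:
    """Toy stand-in for a deep generative model: it memorises
    per-class feature means and replays noisy samples around them."""

    def __init__(self):
        self.class_means = {}

    def fit(self, data):
        # data: list of (features, label) pairs
        sums, counts = {}, {}
        for features, label in data:
            acc = sums.setdefault(label, [0.0] * len(features))
            for i, v in enumerate(features):
                acc[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.class_means = {
            label: [v / counts[label] for v in acc] for label, acc in sums.items()
        }

    def sample(self, n_per_class, rng):
        replayed = []
        for label, mean in self.class_means.items():
            for _ in range(n_per_class):
                replayed.append(([v + rng.gauss(0.0, 0.1) for v in mean], label))
        return replayed

def continual_training(tasks, rng):
    """tasks: list of datasets, each a list of (features, label) pairs.
    Old tasks are preserved by mixing generated replay data with the
    new task's real data before retraining model and generator."""
    generator = ToyReplayGenerator()
    for i, new_data in enumerate(tasks):
        if i > 0:
            train_data = new_data + generator.sample(len(new_data), rng)
        else:
            train_data = new_data
        # ... train the main (discriminative) model on train_data here ...
        generator.fit(train_data)  # refresh the generator on the mixed data
    return generator

rng = random.Random(0)
task_a = [([0.0, 0.0], "A")] * 20
task_b = [([5.0, 5.0], "B")] * 20
gen = continual_training([task_a, task_b], rng)
print(len(gen.class_means))  # 2: the generator still covers both tasks
```

The key point the excerpt makes is visible in the loop: after task B, the training set still contains (generated) class-A data, so neither the main model nor the generator is updated on task B alone.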
“…While most work in CL is focused on discriminative models, there is a recent interest in CL approaches for generative models (Achille et al, 2018; Nguyen et al, 2017). Proposed methods often rely on using generated samples to avoid forgetting (Wu et al, 2018; Lesort et al, 2018a). This technique, which we use in this paper, is termed Generative Replay.…”
Section: Continual State Representation Learning for RL
confidence: 99%
“…(Graves et al, 2018) and others) that are widely used for SRL. Previous work has shown that VAEs can learn continually using generated samples from previous tasks, a method called Generative Replay (Lesort et al, 2018a). S-TRIGGER uses Generative Replay to remember information relative to previously encountered environments.…”
Section: Introduction
confidence: 99%