Lifelong generative modeling
2020
DOI: 10.1016/j.neucom.2020.02.115


Cited by 58 publications (80 citation statements)
References 30 publications
Citation statements by type: 0 supporting, 62 mentioning, 0 contrasting
“…However, both models from [64,67] lack an image inference procedure and Lifelong GAN would need to load all previously learnt data for the generation task. Approaches employing both generative and inference mechanisms are based on the VAE framework [1,50]. However, these approaches have degenerating performance when learning high-dimensional data, due to lacking a powerful generator.…”
Section: Related Work (mentioning)
confidence: 99%
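The excerpt contrasts GAN-based lifelong models with approaches that pair a generative mechanism with an inference mechanism, i.e. the VAE framework. A minimal sketch of such a pairing is below: the encoder is the inference mechanism, the decoder the generator. All architecture sizes are illustrative assumptions, not taken from the cited works.

```python
# Minimal VAE sketch: encoder = inference mechanism, decoder = generator.
# Layer sizes are illustrative assumptions; inputs assumed to lie in [0, 1].
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)       # posterior mean
        self.logvar = nn.Linear(256, z_dim)   # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

def neg_elbo(x, x_hat, mu, logvar):
    # Reconstruction term plus KL(q(z|x) || N(0, I)).
    rec = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```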
“…For comparison we consider LGM [50] and VAEGAN [43], which is one of the best known hybrid models enabled with an inference mechanism. We implement VAEGAN using GRM in order to prevent forgetting.…”
Section: Lifelong Unsupervised Learning (mentioning)
confidence: 99%
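The GRM mentioned here is a generative replay mechanism: past knowledge is rehearsed from a frozen copy of the previous generator instead of stored data. A hedged sketch of that training loop follows; `generate` and `train_step` are hypothetical placeholders, not APIs from the cited works.

```python
# Generative replay sketch: a frozen snapshot of the old model replays
# pseudo-samples of earlier tasks, which are mixed into each new-task batch.
import copy
import torch

def train_task_with_replay(model, new_loader, train_step, replay_ratio=1.0):
    prev = copy.deepcopy(model).eval()          # frozen snapshot of old generator
    for x_new in new_loader:
        with torch.no_grad():
            n_replay = int(replay_ratio * x_new.size(0))
            x_old = prev.generate(n_replay)     # hypothetical sampling method
        x_mix = torch.cat([x_new, x_old], dim=0)
        train_step(model, x_mix)                # one optimizer update on the mix
```

Because replay comes from the generator itself, nothing from previous tasks has to be stored, which is precisely what distinguishes this setup from the load-all-data requirement criticized in the first excerpt.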
“…These weights are modified separately during the training (like an annealing procedure) forcing the model to encode information both in the discrete and continuous variables. Moreover, the same model is also used under the setting of continual learning [13], where a mutual information regularizer is added in order to overcome this issue.…”
Section: VAE With Continuous and Discrete Components (mentioning)
confidence: 99%
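The separately annealed weights described here amount to an ELBO with one KL term per latent type, each on its own schedule. The sketch below shows that loss shape only (the mutual-information regularizer is omitted); the linear ramp schedules and the uniform categorical prior are illustrative assumptions, not the exact recipe of [13].

```python
# Joint-latent ELBO sketch: separate, separately annealed KL weights for the
# continuous (Gaussian) and discrete (categorical) latent variables.
import torch
import torch.nn.functional as F

def joint_neg_elbo(x, x_hat, mu, logvar, cat_logits, step, n_classes=10):
    rec = F.binary_cross_entropy(x_hat, x, reduction="sum")
    # KL for the continuous Gaussian latent against N(0, I).
    kl_cont = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # KL for the discrete latent against a uniform categorical prior:
    # sum_k q_k (log q_k + log K), summed over the batch.
    q = F.softmax(cat_logits, dim=-1)
    log_k = torch.log(torch.tensor(float(n_classes)))
    kl_disc = torch.sum(q * (torch.log(q + 1e-8) + log_k))
    # Separate annealing ramps (illustrative schedules).
    w_cont = min(1.0, step / 10_000)
    w_disc = min(1.0, step / 50_000)
    return rec + w_cont * kl_cont + w_disc * kl_disc
```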
“…Unlike in the standard VAE, we can sample data from specific mixture components at will. This is particularly critical if the generative power of VAEs shall be used in conjunction with methods requiring the identification of the distributional components, such as in continual learning [13,14].…”
Section: Introduction (mentioning)
confidence: 99%
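Component-conditional sampling from a mixture prior is mechanically simple once the mixture parameters are known, as the sketch below shows. Here `decoder`, `means`, and `log_sigmas` are hypothetical stand-ins for a trained mixture-prior VAE, not symbols from the cited works.

```python
# Sketch: draw latents from one chosen Gaussian mixture component k,
# then decode them, yielding samples from that component only.
import torch

def sample_from_component(decoder, means, log_sigmas, k, n=16):
    # means, log_sigmas: (n_components, z_dim) mixture parameters.
    z = means[k] + torch.randn(n, means.size(1)) * log_sigmas[k].exp()
    with torch.no_grad():
        return decoder(z)
```

This per-component control is what lets continual-learning methods tie individual mixture components to individual tasks, which a single standard-normal prior cannot offer.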