2020
DOI: 10.1007/978-3-030-58565-5_46

Learning Latent Representations Across Multiple Data Domains Using Lifelong VAEGAN

Abstract: The problem of catastrophic forgetting occurs in deep learning models trained on multiple databases in a sequential manner. Recently, generative replay mechanisms (GRM) have been proposed to reproduce previously learned knowledge, aiming to reduce forgetting. However, such approaches lack an appropriate inference model and therefore cannot provide latent representations of data. In this paper, we propose a novel lifelong learning approach, namely the Lifelong VAEGAN (L-VAEGAN), which not only induces a po…
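As context for the abstract above, here is a minimal, hedged sketch of how a hybrid VAE-GAN objective can be set up: the VAE branch (encoder, reconstruction and KL terms) provides the inference model and hence latent representations of the data, while the adversarial branch sharpens generation. This is not the authors' L-VAEGAN; all module definitions, shapes, and the 0.1 loss weight are illustrative assumptions.

```python
# Minimal sketch (not the authors' L-VAEGAN): one step of a hybrid VAE-GAN
# objective. Module sizes and the 0.1 adversarial weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim = 64

encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                        nn.Linear(256, 2 * latent_dim))       # outputs mean and log-variance
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, 784), nn.Sigmoid())     # also acts as the GAN generator
discriminator = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

def vaegan_losses(x):
    """Return (generator/VAE loss, discriminator loss) for a batch x of shape (B, 1, 28, 28)."""
    mu, logvar = encoder(x).chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)    # reparameterisation trick
    x_rec = decoder(z).view_as(x)

    # VAE part: reconstruction likelihood + KL regulariser gives an inference
    # model, i.e. latent representations of the data.
    rec = F.binary_cross_entropy(x_rec, x, reduction='sum') / x.size(0)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))

    # GAN part: an adversarial loss on the reconstructions sharpens generation.
    d_real = discriminator(x)
    d_fake = discriminator(x_rec.detach())
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    g_adv = F.binary_cross_entropy_with_logits(discriminator(x_rec),
                                               torch.ones_like(d_real))

    return rec + kl + 0.1 * g_adv, d_loss
```

In a lifelong setting, the decoder above would double as the generator used to replay earlier domains; a sketch of that replay loop appears after the first citation statement below.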

Cited by 48 publications (41 citation statements)
References 21 publications
“…Recently, there have been some attempts to learn cross-domain representations under lifelong learning by introducing an environment-dependent mask that specifies a subset of generative factors [16], or by proposing a teacher-student lifelong learning framework [15] and a hybrid model [36] based on Generative Adversarial Nets (GANs) [37] and VAEs. The models proposed in [15], [16], [36] are based on Generative Replay Mechanisms (GRM), aiming to overcome forgetting. However, these methods suffer from poor performance when considering complex data.…”
Section: Related Research Studies
confidence: 99%
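The statement above refers to Generative Replay Mechanisms (GRM): a frozen copy of the generator trained on earlier databases synthesises "replay" samples that are mixed with data from the current task. A minimal sketch follows, assuming a generator like the one in the earlier snippet; the function name `train_task`, the `train_step` callable, and the batch shapes are hypothetical.

```python
# Minimal sketch of a generative replay mechanism (GRM). Names and shapes are
# illustrative; `train_step` stands in for one optimisation step of any model
# (e.g. the hybrid VAE-GAN losses sketched earlier).
import copy
import torch

def train_task(new_loader, generator, train_step, prev_generator=None,
               n_replay=64, latent_dim=64):
    """Train on a new database while replaying data generated by a frozen
    snapshot of the model learned on previous tasks, to reduce forgetting."""
    for x_new in new_loader:
        batch = x_new
        if prev_generator is not None:
            with torch.no_grad():                        # no gradients through the old model
                z = torch.randn(n_replay, latent_dim)
                x_replay = prev_generator(z)             # assumed to match x_new's shape
            batch = torch.cat([x_new, x_replay], dim=0)  # mix current and replayed data
        train_step(generator, batch)                     # one optimisation step on the mix
    return copy.deepcopy(generator)                      # freeze a snapshot for the next task
```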
“…Recently, likelihood estimation used as a regularization term was shown to stabilize adversarial distribution matching [3]. However, these methods only focus on improving generation capability and do not design suitable objective functions for inducing disentangled representations.…”
Section: Background and Related Work
confidence: 99%
“…Such data sets are assumed to be semantically distinct and to represent different categories of data characteristics. Learning disentangled representations that capture semantically meaningful information allows images to be edited explicitly and is useful for a variety of tasks [1,2,3]. Disentangled representations can also reduce overfitting during training, leading to better generalization [4].…”
Section: Introduction
confidence: 99%
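To illustrate why disentangled representations permit explicit image editing, the sketch below traverses a single latent dimension of an already trained encoder/decoder pair while holding the other factors fixed. The `encoder`/`decoder` interfaces, the factor index, and the traversal values are assumptions for illustration, not part of the cited work.

```python
# Minimal sketch: editing an image through a disentangled latent code by
# varying one latent dimension while keeping the others fixed.
import torch

def traverse_factor(x, encoder, decoder, factor_idx, values=(-3., -1., 0., 1., 3.)):
    """Return a list of decoded images, one per value of the chosen latent factor."""
    with torch.no_grad():
        mu, _ = encoder(x).chunk(2, dim=1)      # use the posterior mean as the latent code
        edits = []
        for v in values:
            z = mu.clone()
            z[:, factor_idx] = v                # change only one generative factor
            edits.append(decoder(z))
    return edits
```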
“…Moreover, existing GAN-based lifelong approaches [11] cannot learn inference models, which prevents their usefulness in a wide range of applications. In contrast, VAE-based lifelong approaches [12] are able to capture cross-domain representations over several tasks but perform poorly when learning databases of high complexity, given that VAEs used as generative replay networks tend to produce rather blurred images. Another category of related work is based on coupled [13] or dual generative models [14].…”
Section: Related Work
confidence: 99%
“…Firstly, LAKD does not require loading previously learned data samples [7], and its memory size does not change as the number of tasks increases. Secondly, it does not require preserving the model's parameters, or even a snapshot of them, after each task switch, as in other generative replay methods [12].…”
Section: Lifelong Twin Generative Adversarial Network (LT-GANs)
confidence: 99%