2019
DOI: 10.1007/978-3-030-30508-6_47

Disentangling Latent Factors of Variational Auto-encoder with Whitening

Abstract: After deep generative models were successfully applied to image generation tasks, learning disentangled latent variables of data became a crucial part of deep generative model research. Many models have been proposed to learn an interpretable, factorized representation of the latent variable by modifying their objective function or model architecture. However, to disentangle the latent variable, some models sacrifice the quality of reconstructed images, while others increase model complexity and become hard to train. In…
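The truncated abstract points to whitening as the disentangling operation, but the paper's exact formulation is not visible here. The following is only a minimal sketch of ZCA whitening applied to a batch of encoder latents; the function name, the (N, d) latent matrix `Z`, and the `eps` constant are illustrative assumptions, not details from the paper.

```python
import numpy as np

def zca_whiten(Z, eps=1e-5):
    """ZCA-whiten a batch of latent codes (illustrative, not the paper's code).

    Z   : (N, d) array, one latent vector per row.
    eps : small constant for numerical stability.

    Returns latents whose sample covariance is approximately the
    identity, i.e. the latent dimensions are decorrelated.
    """
    Zc = Z - Z.mean(axis=0)                  # center each dimension
    cov = Zc.T @ Zc / (len(Zc) - 1)          # (d, d) sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric eigendecomposition
    # ZCA whitening matrix: U diag(1/sqrt(lambda)) U^T
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Zc @ W

# Usage: decorrelate latents collected from a trained encoder.
Z = np.random.randn(1024, 16) @ np.random.randn(16, 16)   # correlated toy latents
Zw = zca_whiten(Z)
print(np.allclose(np.cov(Zw, rowvar=False), np.eye(16), atol=1e-2))  # ~identity
```

ZCA (rather than PCA) whitening is the usual choice when the whitened vectors should stay close to the originals, since it is the whitening transform that minimizes the mean squared distortion of the input.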

Cited by 6 publications (3 citation statements)
References 7 publications
“…Third, it would be interesting to investigate fine-grained control of solution redundancy across containers, to reduce or dynamically tune their number. In a future paper, we will describe how another post-processing operation commonly used in Machine Learning known as "whitening" [23,27] could be used to further decorrelate the FD space and reduce the number of redundancies.…”
Section: Discussion (citation type: mentioning; confidence: 99%)
“…Other examples of disentanglement algorithms include information-theoretic methods in GANs [12], latent whitening [23], covariance penalization [35], and Bayesian hyperpriors [2]. A number of techniques also utilize known groupings or discrete labels of the data [28,8,55,22].…”
Section: Latent Disentanglement in Generative Models (citation type: mentioning; confidence: 99%)
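Of the methods listed in this statement, covariance penalization is the easiest to make concrete: an extra loss term pushes the covariance of the encoder means toward the identity, in the spirit of DIP-VAE (Kumar et al., 2018). A minimal PyTorch sketch follows, assuming `mu` is the (batch, d) tensor of encoder means; the weights are illustrative placeholders, not values from any of the cited papers.

```python
import torch

def covariance_penalty(mu, lam_offdiag=10.0, lam_diag=5.0):
    """DIP-VAE-style regularizer pushing Cov(mu) toward the identity.

    mu : (batch, d) tensor of encoder means of q(z|x).
    Off-diagonal covariances are driven to 0 (decorrelated factors);
    diagonal entries are driven to 1 (each dimension stays active).
    Weights are illustrative, not tuned values from any paper.
    """
    mu_c = mu - mu.mean(dim=0, keepdim=True)      # center over the batch
    cov = mu_c.T @ mu_c / (mu.size(0) - 1)        # (d, d) sample covariance
    off_diag = cov - torch.diag(torch.diagonal(cov))
    return (lam_offdiag * off_diag.pow(2).sum()
            + lam_diag * (torch.diagonal(cov) - 1.0).pow(2).sum())

# Added to the usual VAE objective:
# loss = recon_loss + kl_loss + covariance_penalty(mu)
```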
“…Second, the latent sentence should not be any arbitrary sentence, but it should be a realistic sentence in that language domain. Note that in the conventional VAE models, the latent space is modeled to have an isotropic Gaussian distribution without any other regularizations on the space, even though there are several works that regularize the space to disentangle the dimensions (Higgins et al, 2017;Kim and Mnih, 2018;Hahn and Choi, 2019). If we do not address this issue, the inference model can be trained to mistranslate the monolingual input sentence focusing on easy reconstruction by the reconstruction model.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
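For reference, the "isotropic Gaussian" this statement mentions is the prior p(z) = N(0, I) of the standard VAE objective, and β-VAE (Higgins et al., 2017), cited above, disentangles by upweighting the KL term against that prior. In standard notation (x: observation, z: latent, β ≥ 1, with β = 1 recovering the plain VAE):

$$\mathcal{L}(\theta, \phi; x) \;=\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] \;-\; \beta\, D_{\mathrm{KL}}\!\left(q_\phi(z \mid x)\,\middle\|\,\mathcal{N}(0, I)\right)$$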