2019
DOI: 10.1609/aaai.v33i01.33015885
InfoVAE: Balancing Learning and Inference in Variational Autoencoders

Abstract: A key advance in learning generative models is the use of amortized inference distributions that are jointly trained with the models. We find that existing training objectives for variational autoencoders can lead to inaccurate amortized inference distributions and, in some cases, improving the objective provably degrades the inference quality. In addition, it has been observed that variational autoencoders tend to ignore the latent variables when combined with a decoding distribution that is too flexible. We …

Cited by 198 publications (238 citation statements) · References 3 publications
“…In the case of integrating breast-cancer data we found that the choice of an appropriate regularisation when training the autoencoders is imperative. Our results show that the integrative VAEs yield better (and more disentangled) representations when MMD is employed, which also corresponds to findings from other studies (Zhao et al, 2017;Chen et al, 2018). Moreover, we found that giving a moderately large weight to this regularisation term further improves the quality of the learned representations.…”
Section: Discussion (supporting)
confidence: 92%
“…In response, Higgins et al (2017) control the influence of the disentanglement factor using a parameter β. Moreover, some approaches have experimented with different regularisation terms, such as the InfoVAE (Zhao et al, 2017), where Maximum Mean Discrepancy (MMD) is employed as an alternative to KL divergence. MMD (Gretton et al, 2007) is based on the concept that two distributions are identical if, and only if, all their moments are identical.…”
Section: Variational Autoencoders For Cancer Data Integration (mentioning)
confidence: 99%
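As a rough illustration of the moment-matching idea quoted above, the following is a minimal sketch (not taken from the cited papers) of a kernel two-sample MMD estimate between encoder samples and a standard-normal prior; the RBF kernel, the bandwidth, and the sample shapes are illustrative assumptions.

```python
# Minimal sketch of an MMD^2 estimate: two distributions are compared through
# a kernel two-sample statistic. Kernel choice (RBF) and bandwidth are
# illustrative assumptions, not taken from the cited papers.
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise RBF kernel values between rows of x and rows of y.
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd(x, y, bandwidth=1.0):
    # Biased (V-statistic) estimate of MMD^2 between samples x ~ p and y ~ q.
    k_xx = rbf_kernel(x, x, bandwidth).mean()
    k_yy = rbf_kernel(y, y, bandwidth).mean()
    k_xy = rbf_kernel(x, y, bandwidth).mean()
    return k_xx + k_yy - 2.0 * k_xy

# Example: compare hypothetical encoder latents against a standard-normal prior.
rng = np.random.default_rng(0)
z_prior = rng.standard_normal((256, 2))
z_encoded = rng.standard_normal((256, 2)) + 0.5  # hypothetical shifted latents
print(mmd(z_encoded, z_prior))
```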
“…(iii) The loss L_v is a term calculated using the Maximum Mean Discrepancy (MMD) [25], as explained in the following; notice that the idea of combining VAE and MMD was used for the first time in [26], where the authors proved that infoVAE (VAE using MMD) is fast to train, stable and leads to a better learning of the features if compared to the traditional evidence lower bound (ELBO) [27] criterion used in VAEs. The basic idea of MMD is that two distributions are identical if and only if their moments are the same.…”
Section: Figure (mentioning)
confidence: 99%
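Building on the same idea, here is a hedged sketch of how an MMD penalty can sit alongside a reconstruction term in an InfoVAE-style objective; the mean-squared reconstruction error, the RBF kernel and bandwidth, and the weight `lambda_mmd` are assumptions for illustration, not the authors' exact formulation.

```python
# Hedged sketch of an MMD-regularized autoencoder objective in the spirit of
# the InfoVAE loss described above: reconstruction error plus an MMD^2 penalty
# pulling encoded latents toward prior samples. All constants are illustrative.
import numpy as np

def _rbf(a, b, bandwidth=1.0):
    # Pairwise RBF kernel between rows of a and rows of b.
    d = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-d / (2.0 * bandwidth**2))

def info_vae_style_loss(x, x_recon, z_encoded, z_prior, lambda_mmd=10.0):
    recon = np.mean((x - x_recon) ** 2)              # reconstruction term
    mmd2 = (_rbf(z_encoded, z_encoded).mean()
            + _rbf(z_prior, z_prior).mean()
            - 2.0 * _rbf(z_encoded, z_prior).mean())  # MMD^2 penalty
    return recon + lambda_mmd * mmd2
```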
“…Perhaps the closest approach to ours, are those of learning latent representations for data reconstruction. In variational autoencoders (VAEs) [37], [38], [39], [40], a continuous latent representation space is learned from the inputs, that can then be used to reconstruct inputs or generate new data that follow the same distribution as the data in the training set. In [5], the authors present a new way of training VAEs to learn discrete latent space representations, which naturally leads to a compression algorithm, since continuous (or full-precision) inputs can be mapped to discrete latent representations typically using fewer bits.…”
Section: Decision Stumps (mentioning)
confidence: 99%