Towards Deeper Understanding of Variational Autoencoding Models
2017 · Preprint
DOI: 10.48550/arxiv.1702.08658

Cited by 73 publications (75 citation statements) · References 2 publications
“…Despite these advantages, follow-up works have identified a few important drawbacks of VAEs. The VAE objective is at risk of posterior collapse: learning a latent-space distribution that is independent of the input distribution when the KL term dominates the reconstruction term (Zhao et al., 2017). The poor sample quality of VAEs has been attributed to a mismatch between the prior (which is used for drawing samples) and the posterior (Tomczak & Welling, 2018; Dai & Wipf, 2019; Bauer & Mnih, 2019).…”
Section: Related Work
confidence: 99%
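The collapse mechanism described above can be made concrete: for the diagonal-Gaussian posterior used in most VAEs, the KL term of the ELBO has a closed form per latent dimension, and dimensions whose KL contribution is near zero match the prior regardless of the input, i.e. they carry no information. A minimal diagnostic sketch in PyTorch (the tensor shapes and the 0.01-nat threshold are illustrative assumptions, not from the cited papers):

import torch

def kl_per_dim(mu, logvar):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ) for a diagonal Gaussian,
    # computed per latent dimension and averaged over the batch.
    return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).mean(dim=0)

# mu, logvar: encoder outputs of shape (batch, latent_dim)
mu, logvar = torch.randn(128, 16), torch.randn(128, 16)
kl = kl_per_dim(mu, logvar)
collapsed = (kl < 0.01).sum().item()  # near-zero KL marks a collapsed dimension
print(f"collapsed dims: {collapsed}/{kl.numel()}")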
“…However, they come with some challenges. For instance, VAEs suffer from the posterior collapse problem (Zhao et al., 2017) and a mismatch between the posterior and prior distributions (Tomczak & Welling, 2018; Dai & Wipf, 2019; Bauer & Mnih, 2019). GANs are known to have the mode collapse problem (Che et al., 2016; Dumoulin et al., 2016; Donahue et al., 2016) and optimization instability (Arjovsky & Bottou, 2017) due to their saddle-point problem formulation.…”
Section: Introduction
confidence: 99%
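The "saddle-point problem formulation" mentioned for GANs refers to the min-max objective: the discriminator ascends and the generator descends a shared value function, so neither player minimizes a fixed loss. A minimal sketch of one alternating update (the module and optimizer arguments are illustrative assumptions; D is assumed to output probabilities):

import torch

def gan_step(G, D, opt_G, opt_D, x_real, z):
    # Discriminator ascent: push D(x_real) toward 1 and D(G(z)) toward 0.
    opt_D.zero_grad()
    d_loss = -(torch.log(D(x_real)).mean()
               + torch.log(1.0 - D(G(z).detach())).mean())
    d_loss.backward()
    opt_D.step()

    # Generator update on the opposing objective (non-saturating form):
    # push D(G(z)) toward 1. The two opposed updates are the saddle-point
    # structure blamed for optimization instability.
    opt_G.zero_grad()
    g_loss = -torch.log(D(G(z))).mean()
    g_loss.backward()
    opt_G.step()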
“…a fixed Gaussian distribution), so the model distribution is absolutely continuous and maximum likelihood learning is well defined. However, common distributions such as natural images usually do not have Gaussian observational noise [46]. We therefore focus on modeling distributions that lie on a low-dimensional manifold.…”
Section: Related Work
confidence: 99%
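The absolute-continuity claim follows from the form of the marginal likelihood: with a decoder mean g_θ and a fixed observation noise scale σ (notation assumed here, not quoted from the citing paper), the model density is a mixture of full-support Gaussians,

p_\theta(x) = \int \mathcal{N}\big(x;\, g_\theta(z),\, \sigma^2 I\big)\, p(z)\, dz,

which assigns positive density everywhere, even when the data concentrate on a low-dimensional manifold; this is exactly the mismatch the quoted passage targets.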
“…Images reconstructed from encoder-decoder models are usually blurry. The blur results from using the mean squared error (MSE) loss between the reconstructed and original images, which implicitly assumes the error between the two is Gaussian noise; this is not the case in seismic images (Zhao et al., 2017).…”
Section: Adversarial Training
confidence: 99%
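The MSE-Gaussian link in the quoted passage is a standard likelihood identity: the negative log-density of an isotropic Gaussian is the squared error scaled by 1/(2σ²) plus a constant, so minimizing MSE is maximum likelihood under Gaussian residuals. A short numeric check (array shapes and σ are illustrative assumptions):

import math
import torch

x = torch.rand(3, 64, 64)      # original image
x_hat = torch.rand(3, 64, 64)  # reconstruction
sigma = 1.0                    # assumed fixed noise scale

# Negative log-likelihood of x under N(x_hat, sigma^2 I):
n = x.numel()
nll = ((x - x_hat).pow(2).sum() / (2 * sigma ** 2)
       + 0.5 * n * math.log(2 * math.pi * sigma ** 2))

# Up to the constant and the 1/(2 sigma^2) scale, this is the MSE, so an
# MSE loss implicitly assumes Gaussian reconstruction error.
mse = (x - x_hat).pow(2).mean()
print(nll.item(), mse.item())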