2018
DOI: 10.48550/arxiv.1802.06847
Preprint

Distribution Matching in Variational Inference

Cited by 24 publications (50 citation statements)
References 0 publications
“…As discussed in Section 2.1, issue i) is compromising the VAE framework in any case, as reported in several works (Hoffman & Johnson, 2016; Rosca et al., 2018; Dai & Wipf, 2019). To fix this, some works extend the VAE objective by encouraging the aggregated posterior to match p(z) (Tolstikhin et al., 2017) or by utilizing more complex priors (Kingma et al., 2016; Tomczak & Welling, 2018; Bauer & Mnih, 2019).…”
Section: Ex-post Density Estimation
confidence: 83%
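The excerpt above mentions extending the VAE objective so that the aggregated posterior matches the prior p(z). Below is a minimal NumPy sketch of one such penalty, a kernel MMD between encoder latents and prior samples in the spirit of Tolstikhin et al. (2017); the RBF bandwidth, sample sizes, and toy latents are illustrative assumptions rather than values taken from any cited work.

```python
import numpy as np

def rbf_kernel(a, b, bandwidth=1.0):
    # Pairwise RBF kernel between the rows of a (n, d) and b (m, d).
    sq_dists = (np.sum(a**2, axis=1)[:, None]
                + np.sum(b**2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd_penalty(z_aggregate, z_prior, bandwidth=1.0):
    """Biased MMD^2 estimate between aggregated-posterior samples and prior samples."""
    k_qq = rbf_kernel(z_aggregate, z_aggregate, bandwidth)
    k_pp = rbf_kernel(z_prior, z_prior, bandwidth)
    k_qp = rbf_kernel(z_aggregate, z_prior, bandwidth)
    return k_qq.mean() + k_pp.mean() - 2.0 * k_qp.mean()

# Toy usage: latents from a (hypothetical) encoder vs. samples from the prior.
rng = np.random.default_rng(0)
z_q = rng.normal(loc=0.5, scale=1.2, size=(256, 8))  # stand-in for encoder outputs
z_p = rng.normal(loc=0.0, scale=1.0, size=(256, 8))  # samples from p(z) = N(0, I)
print(f"MMD^2 estimate: {mmd_penalty(z_q, z_p):.4f}")
```

Adding such a penalty (weighted by a hyperparameter) to the reconstruction loss pushes the aggregated posterior toward p(z); the estimate shrinks toward zero as the two sample sets become indistinguishable under the kernel.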
“…While D2C is a special case of VAE, we argue that D2C is non-trivial in the sense that it addresses a long-standing problem in VAE methods [87,84], namely the mismatch between the prior distribution p_θ(z) and the aggregate (approximate) posterior distribution q_φ(z) := E_{p_data(x)}[q_φ(z|x)]. A mismatch could create "holes" [76,41,3] in the prior that the aggregate posterior fails to cover during training, resulting in worse sample quality, as many latent variables used during generation are likely to never have been trained on. We formalize this notion in the following definition.…”
Section: D2C Models Address Latent Posterior Mismatch in VAEs
confidence: 99%
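The definition above, q_φ(z) := E_{p_data(x)}[q_φ(z|x)], can be probed numerically. The sketch below is a hypothetical illustration of the "holes" phenomenon: it models the aggregate posterior as a uniform mixture of per-example Gaussians and measures how often samples drawn from the prior (as during generation) land where that mixture has negligible density. All means, scales, and the threshold are made-up illustration values.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def aggregate_posterior_logpdf(z, means, stds):
    """log q_phi(z), with q_phi(z) = (1/N) * sum_i N(z; mu_i, diag(sigma_i^2))."""
    comp = np.array([
        multivariate_normal.logpdf(z, mean=m, cov=np.diag(s**2))
        for m, s in zip(means, stds)
    ])
    return logsumexp(comp) - np.log(len(means))

rng = np.random.default_rng(0)
d, n = 2, 200
# Hypothetical encoder outputs: posterior means cluster away from the origin,
# so parts of the standard-normal prior are never visited during training.
means = rng.normal(loc=2.0, scale=0.5, size=(n, d))
stds = np.full((n, d), 0.1)

# Draw latents the way generation would: from the prior p(z) = N(0, I).
z_prior = rng.normal(size=(100, d))
log_q = np.array([aggregate_posterior_logpdf(z, means, stds) for z in z_prior])

# Prior samples with very low aggregate-posterior density fall into "holes":
# the decoder was never trained on latents from those regions.
print("fraction of prior samples in low-density regions:", np.mean(log_q < -20.0))
```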
“…So, to sample from our autoencoder, we need an additional mechanism to sample z_sem ∈ R^d from the latent distribution. While VAE is an appealing choice for this task, balancing between retaining rich information in the latent code and maintaining the sampling quality in VAE is hard [27,32,33,35]. GAN is another choice, though it complicates training stability, which is one main strength of DPMs.…”
Section: Sampling with Diffusion Autoencoders
confidence: 99%
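The excerpt notes that an additional mechanism is needed to sample z_sem from the latent distribution; the cited work trains its own latent model for this, so the sketch below is only a generic stand-in for the idea of fitting a sampler to encoded latents after the autoencoder is trained. Here a Gaussian mixture (via scikit-learn) plays that role, and the latent dimension, component count, and synthetic latents are assumptions made for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
d = 16
# Stand-in for semantic latents z_sem produced by a pretrained encoder.
z_sem = np.concatenate([
    rng.normal(loc=-1.0, scale=0.3, size=(500, d)),
    rng.normal(loc=+1.0, scale=0.3, size=(500, d)),
])

# Fit a simple density to the encoded latents ex post, then sample from it
# whenever a new z_sem is needed for generation.
latent_sampler = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
latent_sampler.fit(z_sem)

new_z_sem, _ = latent_sampler.sample(n_samples=4)
print(new_z_sem.shape)  # (4, 16)
```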