2016
DOI: 10.48550/arxiv.1606.04934
Preprint

Improving Variational Inference with Inverse Autoregressive Flow

Abstract: The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors.
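As a rough illustration of the transformation the abstract describes, the following is a minimal numerical sketch of a single IAF step; the masked-linear autoregressive network and all names (autoregressive_nn, iaf_step) are illustrative stand-ins, not the authors' implementation.

# Minimal sketch of a single inverse autoregressive flow (IAF) step,
# using only NumPy and a toy masked-linear autoregressive network.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def autoregressive_nn(z, W_m, W_s, b_m, b_s):
    # Strictly lower-triangular masks ensure output i depends only on z_{<i}.
    mask = np.tril(np.ones((z.size, z.size)), k=-1)
    m = (W_m * mask) @ z + b_m
    s = (W_s * mask) @ z + b_s
    return m, s

def iaf_step(z, params):
    # One invertible transformation: z_new = sigma * z + (1 - sigma) * m,
    # with sigma = sigmoid(s). Because sigma_i and m_i depend only on z_{<i},
    # the Jacobian is triangular and log|det| = sum(log sigma).
    m, s = autoregressive_nn(z, *params)
    sigma = sigmoid(s)
    z_new = sigma * z + (1.0 - sigma) * m
    log_det = np.sum(np.log(sigma))
    return z_new, log_det

rng = np.random.default_rng(0)
d = 4
params = [0.1 * rng.normal(size=(d, d)), 0.1 * rng.normal(size=(d, d)),
          np.zeros(d), np.zeros(d)]
z0 = rng.normal(size=d)             # sample from the simple base posterior
z1, log_det = iaf_step(z0, params)  # one flow step; chaining such steps adds flexibility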

Cited by 60 publications (103 citation statements)
References 15 publications
“…This estimation assumes Standard Normal priors for the likelihood of the latent data, as described in appendix A. There is a great deal of ongoing research into methods to improve the likelihood estimate by changing the latent space priors or improving the posterior approximations of the encoder [2,48,82,83].…”
mentioning
confidence: 99%
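For context on the quoted statement: with a standard normal prior p(z) = N(0, I) and a diagonal-Gaussian encoder, the likelihood estimate referred to is the evidence lower bound, whose prior term has a closed form (a sketch of the standard expressions, not taken from the citing paper):

\log p(x) \ge \mathbb{E}_{q(z \mid x)}\big[\log p(x \mid z)\big] - D_{\mathrm{KL}}\big(q(z \mid x) \,\|\, \mathcal{N}(0, I)\big),
\qquad
D_{\mathrm{KL}} = \tfrac{1}{2} \sum_i \big(\mu_i^2 + \sigma_i^2 - \log \sigma_i^2 - 1\big).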
“…To estimate the density of the dataset, it is proposed to use normalizing flows, which are primarily applied to generative modeling [48-53] and are increasingly being used for scientific applications [54,55]. Given a dataset, generative modeling attempts to create new data points that were previously unseen but are distributed like the original dataset.…”
Section: Probability Map Estimation
mentioning
confidence: 99%
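As a toy illustration of the change-of-variables rule that underlies such density estimation (a sketch with a single fixed affine bijection; real normalizing flows stack many learned invertible maps):

# Toy change-of-variables density estimate with one fixed affine bijection.
import numpy as np

def log_density(x, a, b):
    # If z = f(x) = (x - b) / a and the base density is N(0, I), then
    # log p(x) = log N(f(x); 0, I) + log|det df/dx|, where log|det df/dx| = -sum(log|a|).
    z = (x - b) / a
    log_base = -0.5 * np.sum(z ** 2 + np.log(2.0 * np.pi))
    log_det = -np.sum(np.log(np.abs(a)))
    return log_base + log_det

x = np.array([0.3, -1.2])
print(log_density(x, a=np.array([2.0, 0.5]), b=np.array([0.1, -0.3])))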
“…This results in an additional loss component and a specific estimator for the training algorithm called the Stochastic Gradient Variational Bayes (SGVB) estimator. Researchers have incorporated some more sophisticated posteriors q(z|x) to extend the variational autoencoder [14], [21], [23].…”
Section: B. Approximate Inference
mentioning
confidence: 99%
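The SGVB estimator mentioned above rests on the reparameterization trick; a minimal sketch under a diagonal-Gaussian posterior (toy values; the decoder and likelihood referenced in the comments are hypothetical):

# Toy sketch of the reparameterization trick behind the SGVB estimator.
import numpy as np

rng = np.random.default_rng(0)
mu, log_var = np.array([0.5, -0.2]), np.array([-1.0, 0.3])  # encoder outputs for one datapoint

# Sample noise and transform it, so z is a differentiable function of (mu, log_var).
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# A one-sample Monte Carlo estimate of E_q[log p(x|z)] would evaluate a (hypothetical)
# decoder at z; gradients then flow back through z to the variational parameters.
print(z)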