2019
DOI: 10.48550/arxiv.1905.04982
Preprint

Learning Hierarchical Priors in VAEs

Abstract: We propose to learn a hierarchical prior in the context of variational autoencoders to avoid the over-regularisation resulting from a standard normal prior distribution. To incentivise an informative latent representation of the data by learning a rich hierarchical prior, we formulate the objective function as the Lagrangian of a constrained optimisation problem and propose an optimisation algorithm inspired by Taming VAEs. We introduce a graph-based interpolation method, which shows that the topology of the le…
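The abstract's "Lagrangian of a constrained optimisation problem" refers to training the VAE under a constraint (typically on reconstruction error) rather than with a fixed ELBO weighting. A minimal sketch of the multiplier update in the GECO style of Taming VAEs, which the paper says inspired its algorithm — the function name, hyperparameters, and exact update form here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def geco_lambda_update(lmbda, recon_err, kappa, alpha=0.99, step=0.1, state=None):
    """One GECO-style Lagrange-multiplier update (illustrative sketch).

    Enforces the constraint E[recon_err] <= kappa while the main loss
    minimises KL + lmbda * (recon_err - kappa). `state` carries an
    exponential moving average of the constraint violation.
    """
    constraint = recon_err - kappa          # <= 0 once the constraint holds
    if state is None:
        state = constraint
    state = alpha * state + (1 - alpha) * constraint
    # Multiplicative exponential update keeps lmbda strictly positive:
    # lmbda grows while reconstruction is worse than kappa, shrinks otherwise.
    lmbda = lmbda * np.exp(step * state)
    return lmbda, state
```

Used inside a training loop, the multiplier rises while reconstruction error exceeds the tolerance `kappa`, forcing the optimiser to prioritise reconstruction, and decays once the constraint is satisfied, letting the KL (prior-matching) term dominate.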

Cited by 1 publication (1 citation statement)
References 9 publications
“…Hierarchical modeling in the Bayesian framework has been successful to design the form of the prior (Daumé III, 2009;Zhao et al, 2017;Klushyn et al, 2019;Wang & Van Hoof, 2020) and posterior distributions (Ranganath et al, 2016;Krueger et al, 2017;Zhen et al, 2020) based on many observations. It allows the latent variable to follow a complicated distribution and forms a highly flexible approximation (Krueger et al, 2017).…”
Section: Related Work
confidence: 99%