2020
DOI: 10.48550/arxiv.2006.10597
Preprint

Variational Autoencoder with Learned Latent Structure

Marissa C. Connor,
Gregory H. Canal,
Christopher J. Rozell

Abstract: The manifold hypothesis states that high-dimensional data can be modeled as lying on or near a low-dimensional, nonlinear manifold. Variational Autoencoders (VAEs) approximate this manifold by learning mappings from low-dimensional latent vectors to high-dimensional data while encouraging a global structure in the latent space through the use of a specified prior distribution. When this prior does not match the structure of the true data manifold, it can lead to a less accurate model of the data. To resolve th…
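The abstract's point about a mismatched prior can be illustrated with the KL term of a standard Gaussian-prior VAE (this is the generic VAE objective, not the paper's learned-latent-structure method): the fixed prior penalizes any posterior that drifts away from the standard normal, even when that drift reflects the true data manifold.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Per-sample KL(N(mu, sigma^2) || N(0, I)) -- the term a standard
    VAE uses to pull the approximate posterior toward the fixed
    standard-normal prior."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# A posterior matching the prior exactly incurs zero penalty...
mu = np.zeros((1, 2))
log_var = np.zeros((1, 2))
print(gaussian_kl(mu, log_var))        # → [0.]

# ...while a posterior shifted away from the prior is penalized,
# regardless of whether that shift fits the data manifold better.
print(gaussian_kl(mu + 2.0, log_var))  # → [4.]
```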

Cited by 1 publication (4 citation statements)
References 13 publications (27 reference statements)
“…The Laplacian prior with a fixed threshold performs the best among variational methods on objective (9). We note that methods that learn the threshold parameters tend to increase the number of active latent features, leading to an increase in L1 penalty in the validation loss.…”
Section: Linear Sparse Coding Performance
confidence: 99%
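The statement above contrasts a fixed threshold with learned threshold parameters under a Laplacian prior. A minimal sketch of the mechanism, assuming the usual soft-threshold shrinkage operator associated with a Laplacian prior (the function name and setup here are illustrative, not from the cited paper): lowering the threshold leaves more latent features active, which directly inflates the L1 penalty.

```python
import numpy as np

def soft_threshold(z, t):
    """Shrinkage operator associated with a Laplacian prior:
    features with |z| <= t are zeroed out (inactive)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
z = rng.normal(size=100)

# A fixed threshold vs. a smaller, "learned-down" threshold:
# the smaller threshold keeps more features active and pays more L1.
for t in (1.0, 0.1):
    a = soft_threshold(z, t)
    print(f"threshold={t}: active={np.count_nonzero(a)}, "
          f"L1 penalty={np.abs(a).sum():.2f}")
```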
“…Rather than give each sample equal probability, we introduce a new sampling strategy motivated by the approximation to expectations utilized in [37,9]. In these works, it is observed that the selected prior distribution concentrates most of its probability mass around the maximum value.…”
Section: Max ELBO Sampling
confidence: 99%
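One way to read the sampling strategy described above — as a sketch only, since the cited works' exact procedure is not given here: when the prior concentrates most of its mass near the maximum, an expectation over equally weighted posterior draws can be approximated by keeping just the draw the prior rates highest. All names below are hypothetical.

```python
import numpy as np

def laplace_log_density(z, scale=1.0):
    # Log density of a factorized Laplace prior, up to additive constants.
    return -np.sum(np.abs(z) / scale, axis=-1)

def max_density_sample(rng, mu, sigma, k=16):
    """Draw k posterior samples and keep the one with the highest
    prior log density, rather than weighting all k equally."""
    draws = mu + sigma * rng.normal(size=(k,) + mu.shape)
    scores = laplace_log_density(draws)
    return draws[np.argmax(scores)]

rng = np.random.default_rng(0)
z = max_density_sample(rng, mu=np.zeros(8), sigma=np.ones(8))
```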