2019
DOI: 10.1561/2200000056
An Introduction to Variational Autoencoders

Abstract: Variational autoencoders provide a principled framework for learning deep latent-variable models and corresponding inference models.

Cited by 1,340 publications (751 citation statements). References 78 publications.
“…A good introduction can be found in Ref. [23]. In short, a VAE consists of a probabilistic encoder q(z|σ) and a probabilistic decoder p(σ|z), converting the sequence into a continuous multidimensional latent variable z and back.…”
Section: B Variational Auto-encoder (mentioning)
confidence: 99%
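The encoder/decoder pairing described in the quote above can be made concrete with a small sketch. The following is a minimal illustration only, assuming a PyTorch-style implementation; the module names, layer sizes, and the Gaussian encoder with reparameterized sampling are assumptions for exposition, not code from the cited papers.

```python
# Minimal VAE sketch: probabilistic encoder q(z|x) and decoder p(x|z).
# Illustrative only; layer sizes and architecture are arbitrary choices.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=16):
        super().__init__()
        # Encoder trunk producing the parameters of q(z|x) = N(mu, diag(sigma^2)).
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder mapping the continuous latent variable z back to data space.
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps with eps ~ N(0, I) keeps sampling differentiable.
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar
```

In this sketch the encoder plays the role of q(z|x) and the decoder the role of p(x|z); the quoted paper writes the data variable as σ (a sequence), but the mapping to and from the latent z is the same.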
“…The dense layers are followed by two layers representing the mean and log variance of a normal distribution. This allows hidden influence factors to be modeled in a way that emphasizes the mean while retaining variation, rather than reducing the network's loss by outputting the mean [19,20]. Furthermore, a constraint is placed on these two layers to minimize the Kullback-Leibler divergence to a Gaussian N(0 + s, 1), where s is a location shift parameter:…”
Section: VAE Encoder (mentioning)
confidence: 99%
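The quoted construction (separate mean and log-variance layers, with a KL penalty toward a location-shifted unit-variance Gaussian) has a simple closed form. The sketch below is a hedged illustration of that penalty; the function name and the shift argument s are assumptions for exposition, not the cited authors' implementation.

```python
# KL( N(mu, sigma^2) || N(s, 1) ) in closed form, summed over latent dimensions.
# Adding this term to the reconstruction loss gives the usual VAE objective.
import torch

def kl_to_shifted_gaussian(mu, logvar, s=0.0):
    # 0.5 * ( sigma^2 + (mu - s)^2 - 1 - log sigma^2 ) per dimension.
    var = torch.exp(logvar)
    return 0.5 * torch.sum(var + (mu - s) ** 2 - 1.0 - logvar, dim=-1)
```

With s = 0 this reduces to the standard KL term toward N(0, 1) used in most VAE objectives.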
“…Variational autoencoders (VAE) estimating the underlying probability distribution of the input data [19,20] have been successfully used in scRNA-Seq, with a special focus on the latent variable space for dimension reduction and clustering [21,22]. It is worth noting that the two previously mentioned algorithms are not mutually exclusive and concepts may be combined for specific applications (NMF [12] and AE [12,17]), becoming more practical if the latent variable space takes specific cell type classes into account across training of the network model.…”
(mentioning)
confidence: 99%
“…We will adopt a probabilistic framework for latent-variable modeling of data [7], where a generative model p_θ(x, z) for data x and latent variables z is assumed:…”
Section: Introduction (mentioning)
confidence: 99%
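For concreteness, the generative model referenced in the quote factorizes, in the standard notation of the cited framework, into a prior over the latent variables and a conditional likelihood, with the data marginal obtained by integrating out z:

p_θ(x, z) = p_θ(z) p_θ(x | z),    p_θ(x) = ∫ p_θ(x, z) dz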
“…In principle, this could be used to form disentangled representations. However, the model posterior is often intractable [7], and variational methods are used to estimate it.…”
Section: Introduction (mentioning)
confidence: 99%
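The variational estimate alluded to in this statement is the evidence lower bound (ELBO) of the cited framework, which sidesteps the intractable posterior p_θ(z | x) by introducing an inference model q_φ(z | x); in standard notation:

log p_θ(x) ≥ E_{q_φ(z|x)}[ log p_θ(x | z) ] − KL( q_φ(z | x) ‖ p_θ(z) )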