2013
DOI: 10.48550/arxiv.1312.6114
Preprint

Auto-Encoding Variational Bayes

Abstract: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator…
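The reparameterization mentioned in the abstract rewrites a sample from the approximate posterior as z = mu + sigma * eps with eps ~ N(0, I), so a Monte Carlo estimate of the variational lower bound becomes differentiable in the variational parameters. Below is a minimal sketch of such an estimator, assuming a diagonal-Gaussian approximate posterior, a standard-normal prior, and a toy linear-Gaussian decoder; these specifics are illustrative assumptions, not details taken from the abstract.

```python
# Minimal sketch of the reparameterization trick described in the abstract.
# Assumed setup (not from the source): q(z|x) = N(mu, diag(sigma^2)), prior p(z) = N(0, I),
# and a toy linear decoder with unit-variance Gaussian likelihood.
import numpy as np

rng = np.random.default_rng(0)

def elbo_estimate(x, mu, log_sigma, W, n_samples=16):
    """Monte Carlo estimate of the variational lower bound using z = mu + sigma * eps."""
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal((n_samples, mu.size))
    z = mu + sigma * eps                       # reparameterized samples, differentiable in (mu, sigma)
    x_hat = z @ W.T                            # toy linear decoder mean
    log_lik = -0.5 * np.sum((x - x_hat) ** 2, axis=1)  # log p(x|z) up to an additive constant
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ) for the diagonal-Gaussian case
    kl = 0.5 * np.sum(mu ** 2 + sigma ** 2 - 2 * log_sigma - 1)
    return log_lik.mean() - kl

x = rng.standard_normal(4)                     # one observed datapoint, dimension 4
mu, log_sigma = np.zeros(2), np.zeros(2)       # variational parameters for a 2-d latent
W = rng.standard_normal((4, 2))                # toy decoder weights
print(elbo_estimate(x, mu, log_sigma, W))
```

Because the noise eps is drawn independently of (mu, sigma), gradients of this estimate with respect to the variational parameters can be taken directly, which is what allows optimization with standard stochastic gradient methods.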

Cited by 4,174 publications (5,965 citation statements)
References 8 publications
“…This approach renders our design space amenable to the construction of low-dimensional and robust surrogate models and the use of off-the-shelf Bayesian optimization algorithms [60,61]. We learn an appropriate latent space embedding in a data-driven fashion by training a regularized autoencoder (RAE) [62], a deterministic adaptation of the variational autoencoder (VAE) architecture [63], over the corpus of 124,327 CG molecular graphs (Fig. 2b).…”
Section: Chemical Space Embedding
confidence: 99%
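For context on the "deterministic adaptation" mentioned in the quote above: a regularized autoencoder typically replaces the VAE's stochastic encoder and KL term with a deterministic encoding plus an explicit penalty on the latent code. The sketch below follows that common formulation; the toy linear encoder/decoder, the L2 latent penalty, and the beta value are illustrative assumptions and are not taken from the cited chemistry paper.

```python
# Hedged sketch contrasting a deterministic RAE objective with the VAE objective above:
# no sampling and no KL term, just reconstruction plus an explicit penalty on z.
import numpy as np

def rae_loss(x, enc_W, dec_W, beta=1e-3):
    """Deterministic autoencoder loss: reconstruction + beta * ||z||^2."""
    z = x @ enc_W.T          # deterministic encoding (toy linear map)
    x_hat = z @ dec_W.T      # deterministic decoding (toy linear map)
    recon = np.sum((x - x_hat) ** 2)
    return recon + beta * np.sum(z ** 2)

rng = np.random.default_rng(1)
x = rng.standard_normal(8)
enc_W = rng.standard_normal((3, 8))
dec_W = rng.standard_normal((8, 3))
print(rae_loss(x, enc_W, dec_W))
```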
“…Born and Manica [42] predicted that multimodal deep learning for chemistry, drawing on disparate data sources to generate molecules, would be the next challenge in DL in the near future. The Variational Autoencoder (VAE) method [44] was developed as an algorithm to learn continuous molecular representations. This method was used by Gomez-Bombarelli et al. for automatic chemical design using a data-driven continuous representation of molecules.…”
Section: Deep Learning For Processing Molecular Data In Drug Design
confidence: 99%
“…On fMRI data these methods do not make much sense, as brain images are not invariant to such transformations. More advanced techniques [29] are based on generative models such as GANs or variational auto-encoders [12]. Although GAN-based methods are powerful, they are slow and difficult to train [2].…”
Section: Related Work
confidence: 99%
“…Our method is not an adversarial procedure. However, it relates to other powerful generative models such as variational auto-encoders [12], with which it shares strong similarities. Indeed, the analog of the encoding function in the variational auto-encoder is given by e(x) = Λ^{-1/2} q(W_rest x) in our model, and the analog of the decoding function in the variational auto-encoder is given by…
Section: Related Work
confidence: 99%
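As a concrete reading of the encoding analog quoted above, the expression e(x) = Λ^{-1/2} q(W_rest x) applies a nonlinearity q to a linear projection of x and then whitens the result, much as a VAE encoder maps x to the parameters of q(z|x). The sketch below only evaluates that expression; the shapes, the choice q = tanh, and the random matrices are illustrative assumptions, and the quoted decoding analog is truncated in the source, so it is not reconstructed here.

```python
# Hedged sketch of the encoding-function analogy quoted above.
# All shapes, the choice q = tanh, and the random matrices are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)
d_x, d_z = 6, 3
W_rest = rng.standard_normal((d_z, d_x))            # assumed linear projection
Lambda = np.diag(rng.uniform(0.5, 2.0, size=d_z))   # assumed positive-definite diagonal Lambda

def encode(x, q=np.tanh):
    """Analog of a VAE encoder mapping: e(x) = Lambda^{-1/2} q(W_rest x)."""
    Lambda_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(Lambda)))
    return Lambda_inv_sqrt @ q(W_rest @ x)

x = rng.standard_normal(d_x)
print(encode(x))
```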