Variational Methods for Machine Learning With Applications to Deep Networks (2021)
DOI: 10.1007/978-3-030-70679-1_5

Variational Autoencoder

Cited by 27 publications (12 citation statements)
References 11 publications
“…[12, 27]. A Variational Autoencoder (VAE) assumes that the data are sampled from an arbitrary statistical distribution [28]. It is trained in an unsupervised manner with an encoder that provides a low-dimensional latent representation of the data vector, and a decoder which attempts to reconstruct the input vector.…”
Section: Results (mentioning)
confidence: 99%
“…It is trained in an unsupervised manner with an encoder that provides a low-dimensional latent representation of the data vector, and a decoder which attempts to reconstruct the input vector. The encoder transforms its input into the parameters of a multidimensional statistical distribution, and sampling occurs where a point is drawn from the encoded distribution and fed into the decoder [28]. It can be seen as a probabilistic version of AE that can generate new data and transform existing data within an encoding–modification–decoding scheme [29].…”
Section: Results (mentioning)
confidence: 99%
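The encode-sample-decode scheme these statements describe can be sketched as follows. This is a minimal PyTorch illustration, not the cited chapter's implementation; the layer sizes (784-dimensional input, 2-dimensional latent space) are arbitrary assumptions:

import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, in_dim=784, hidden=256, latent=2):
        super().__init__()
        # Encoder: maps the input to the parameters (mu, log sigma^2)
        # of a multidimensional Gaussian over the latent space.
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        # Decoder: attempts to reconstruct the input from a latent sample.
        self.dec = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Sampling: draw a point from the encoded distribution
        # and feed it into the decoder.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

New data can then be generated by decoding draws from the prior, e.g. self.dec(torch.randn(n, latent)), and existing data can be transformed by encoding, modifying the latent point, and decoding, matching the encoding–modification–decoding scheme of the quote.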
“…The high expressiveness of the Gaussian distribution allows it to describe many phenomena in the real world. According to [30], the Gaussian distribution assumption allows VAE to utilize the reparameterization trick, which enhances training efficiency without reducing its fitting capability. VAE first approximates the Gaussian distribution N(μ, σ²) by computing μ and σ with two neural network models.…”
Section: VAE Encoder (mentioning)
confidence: 99%
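As a sketch of the reparameterization trick this statement refers to: sampling z ~ N(μ, σ²) directly is not differentiable with respect to μ and σ, so one instead samples auxiliary noise ε ~ N(0, I) and computes z deterministically, letting gradients flow to the encoder parameters. This is a hypothetical minimal example, not code from the cited work:

import torch

def reparameterize(mu, sigma):
    # eps carries all of the randomness; mu and sigma enter through
    # a deterministic, differentiable expression.
    eps = torch.randn_like(sigma)
    return mu + sigma * eps

mu = torch.zeros(4, requires_grad=True)
sigma = torch.ones(4, requires_grad=True)
z = reparameterize(mu, sigma)
z.sum().backward()
print(mu.grad, sigma.grad)  # both gradients are populated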
“…We introduce non-linearity into the model by adding an activation layer. Following the common practice in [30], we choose ReLU and sigmoid as the activation function for μ and σ, respectively (Equations 8 and 9).…”
Section: VAE Encoder (mentioning)
confidence: 99%
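A minimal sketch of the two encoder heads as described in this quote. The ReLU/sigmoid choice follows the quoted paper (a common alternative is to output log σ² with no activation at all), and the layer sizes here are assumptions:

import torch
import torch.nn as nn

class EncoderHeads(nn.Module):
    # Maps a shared hidden representation h to (mu, sigma).
    def __init__(self, hidden=256, latent=2):
        super().__init__()
        self.mu_head = nn.Linear(hidden, latent)
        self.sigma_head = nn.Linear(hidden, latent)

    def forward(self, h):
        mu = torch.relu(self.mu_head(h))           # ReLU for mu (Equation 8 in the quote)
        sigma = torch.sigmoid(self.sigma_head(h))  # sigmoid keeps sigma in (0, 1) (Equation 9)
        return mu, sigma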
“…During the model training process, which is usually conducted based on an Expectation-Maximization meta-algorithm, the encoding distribution was “regularized”, so that the resulting latent space sufficed to generate new and meaningful datasets. The detailed mathematical derivation will be discussed in Section 3, and readers can also refer to [34] for more technical details. The VAE model was first proposed by Kingma and Welling [35], and has been widely applied in different disciplines, for example, image generation, data classification and dimensionality reduction [36, 37, 38].…”
Section: Introduction (mentioning)
confidence: 99%
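Concretely, the “regularization” of the encoding distribution mentioned here is the KL-divergence term of the evidence lower bound (ELBO) from Kingma and Welling [35]. A minimal sketch of the per-batch loss, assuming a Bernoulli decoder, a standard-normal prior, and an encoder that outputs log σ²:

import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar):
    # Reconstruction term: how well the decoder reproduces the input.
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # Regularization term: KL(N(mu, sigma^2) || N(0, I)) in closed form;
    # it pulls the encoding distribution toward the prior so that the
    # latent space remains useful for generating new samples.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl  # minimizing this maximizes the ELBO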