2016
DOI: 10.48550/arxiv.1611.05013
Preprint
PixelVAE: A Latent Variable Model for Natural Images

Cited by 60 publications (82 citation statements). References 0 publications.
“…Variational Autoencoders (VAEs) [1,2] are effective deep generative models that use variational inference and the reparameterization trick for dimension reduction [3], representation learning [4], and data generation [5]. Various VAE variants have been proposed, built on more expressive variational posteriors [6][7][8], powerful decoders [9,10], and flexible priors [10,17,12,13].…”
Section: Bayesian Pseudocoresets Exemplar VAE (mentioning)
confidence: 99%
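The reparameterization trick referenced here is what makes the sampling step of a VAE differentiable. Below is a minimal sketch, assuming PyTorch as the framework; the function name is illustrative, not taken from the cited papers:

```python
import torch

def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Sample z ~ N(mu, diag(sigma^2)) as z = mu + sigma * eps, eps ~ N(0, I).

    Writing the sample as a deterministic function of (mu, logvar) plus
    exogenous noise keeps the sampling path differentiable, so gradients
    can flow back into the encoder parameters during training.
    """
    std = torch.exp(0.5 * logvar)  # sigma = exp(0.5 * log sigma^2)
    eps = torch.randn_like(std)    # standard-normal noise, same shape/device
    return mu + eps * std
```

Sampling z directly from N(mu, sigma^2) would block gradient flow through mu and sigma; moving the randomness into eps is the whole point of the trick.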
“…In particular, owing to their use of the reparameterization trick and variational inference for optimization, Variational Autoencoders (VAEs) [1,2] stand out and have demonstrated significant success in dimension reduction [3], representation learning [4], and data generation [5]. In addition, various VAE variants have been proposed, built on more expressive variational posteriors [6][7][8] or powerful decoders [9,10].…”
Section: Introduction (mentioning)
confidence: 99%
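The variational-inference objective both statements refer to is the evidence lower bound (ELBO) introduced by Kingma and Welling [1]; a standard form is:

```latex
\mathcal{L}(\theta, \phi; x)
  = \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]}_{\text{reconstruction}}
  - \underbrace{D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)}_{\text{regularization}}
  \;\le\; \log p_\theta(x)
```

The reparameterization trick is what makes the expectation term amenable to low-variance Monte Carlo gradient estimates with respect to the variational parameters φ.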
“…Autoencoder. Autoencoders can learn effective representations of input data in an unsupervised way. They are mainly used for data compression [13][14][15], representation learning [16,17], and as generative models [18][19][20]. In the framework of an autoencoder [21,22], an encoder E_θ computes h = E_θ(x) from an input x.…”
Section: Related Work (mentioning)
confidence: 99%
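To make the h = E_θ(x) framing concrete, here is a minimal autoencoder sketch, again assuming PyTorch; the layer sizes and names are illustrative choices, not details from the cited works:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal autoencoder: the encoder compresses x to a code h = E_theta(x),
    and the decoder reconstructs x_hat from h. Training minimizes the
    reconstruction error, so no labels are required (unsupervised)."""

    def __init__(self, in_dim: int = 784, code_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.encoder(x)     # h = E_theta(x): compressed representation
        return self.decoder(h)  # x_hat: reconstruction from the code

model = Autoencoder()
x = torch.rand(8, 784)                       # a toy batch of flattened images
loss = nn.functional.mse_loss(model(x), x)   # reconstruction objective
```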
“…VAEs have shown a promising ability to generate complicated data, including faces [11], natural images [8], text [25], and segmentations [10,27]. Following IntroVAE [11], we adopt introspective adversarial learning in our method to produce high-quality, realistic face images.…”
Section: Variational Autoencoder (mentioning)
confidence: 99%
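For context on the introspective adversarial learning this statement adopts, below is a rough sketch of the margin-based KL objective described in the IntroVAE paper [11]: the encoder acts as its own discriminator by keeping the KL of real samples low while pushing generated samples' KL above a margin. All names and the margin/weight values are illustrative assumptions, not the authors' exact formulation:

```python
import torch

def kl_to_prior(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """KL(q(z|x) || N(0, I)) per sample, summed over latent dimensions."""
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=1)

def introspective_losses(kl_real: torch.Tensor, kl_fake: torch.Tensor,
                         margin: float = 10.0, alpha: float = 0.25):
    """IntroVAE-style objectives (sketch). The encoder minimizes the KL of
    real samples and is rewarded when generated samples' KL exceeds the
    margin (discriminator role); the generator tries to pull the KL of its
    samples back down (fooling the encoder)."""
    encoder_loss = kl_real.mean() + alpha * torch.relu(margin - kl_fake).mean()
    generator_loss = alpha * kl_fake.mean()
    return encoder_loss, generator_loss
```

The min-max game is played entirely inside the VAE's own inference machinery, which is what lets the model sharpen samples without a separate discriminator network.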