2015
DOI: 10.48550/arxiv.1512.00570
Preprint
Attribute2Image: Conditional Image Generation from Visual Attributes

Cited by 50 publications (52 citation statements)
References 28 publications
“…After training, generation is done by sampling a vector from the latent space, concatenating it with the desired label and forwarding it through the decoder to obtain the output. More technical details regarding CVAEs can be found in [8], [31].…”
Section: Modeling and Conditional Training/Generation
confidence: 99%
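The CVAE generation procedure quoted above (sample a latent vector, concatenate it with the desired label, forward through the decoder) can be sketched as follows. This is a minimal illustration, not code from the cited papers: the dimensions and the stand-in "decoder" (a single random linear layer plus a sigmoid, standing in for a trained network) are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the cited papers).
latent_dim, num_classes, image_dim = 16, 10, 784

# Stand-in "decoder": one random linear map + sigmoid, representing a
# trained network that maps [z ; one-hot label] -> flattened image.
W = rng.standard_normal((latent_dim + num_classes, image_dim)) * 0.1

def decode(zy):
    # Sigmoid keeps "pixel" values in (0, 1).
    return 1.0 / (1.0 + np.exp(-zy @ W))

def generate(label, n_samples=4):
    """Sample z ~ N(0, I), concatenate a one-hot label, and decode."""
    z = rng.standard_normal((n_samples, latent_dim))
    y = np.zeros((n_samples, num_classes))
    y[:, label] = 1.0
    return decode(np.concatenate([z, y], axis=1))

samples = generate(label=3)
print(samples.shape)  # (4, 784)
```

In a real CVAE the decoder weights come from jointly training an encoder and decoder on labeled data; only the sampling-time mechanics are shown here.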
“…In addition to training a standard VAE on each sketch domain, we also trained a conditional VAE (CVAE) [27,37] on sketches from all domains taken together, with each sketch labeled with its corresponding domain. Conditional generative models [15], as the name suggests, enable generation of outputs conditioned on some given input.…”
Section: Conditional Sketch Generation
confidence: 99%
“…We showed it is achievable via the proposed CoGAN framework. Note that our work is different from the Attribute2Image work [27], which is based on a conditional VAE model [28]. The conditional model can be used to generate images of different styles, but it is unsuitable for generating images in two different domains, such as the color and depth image domains.…”
Section: Related Work
confidence: 99%