2017
DOI: 10.48550/arxiv.1711.10678
Preprint
AttGAN: Facial Attribute Editing by Only Changing What You Want

Cited by 31 publications (93 citation statements). References: 0 publications.
“…Besides producing impressive image samples, generative adversarial networks (GANs) [9] have been shown to learn meaningful latent spaces [18] with extensive studies on multiple derived spaces [15,44] and various knobs and controls for conditional human face generation [12,28,42]. Encoding an image to the GAN's latent space requires an optimization-based inversion process [19,45] or an external image encoder [30], which has limited reconstruction fidelity (or produces latent codes in much higher dimensions outside the learned manifold).…”
Section: Related Work
confidence: 99%
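The excerpt above mentions that embedding a real image into a GAN's latent space is typically done either with an optimization-based inversion process or with an external encoder. The following is a minimal sketch of the optimization-based route only, under assumptions: the generator is frozen and a latent code z is optimized so that G(z) matches a target image. `generator` and `target_image` are placeholders, and the cited methods [19, 45] add perceptual terms, better initialization, and latent regularization beyond this basic loop.

```python
import torch

def invert_image(generator, target_image, latent_dim=512, steps=1000, lr=0.01):
    # Start from a random latent code; the generator's weights stay fixed.
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Pixel-wise reconstruction loss; as the excerpt notes, fidelity is limited.
        loss = torch.nn.functional.mse_loss(generator(z), target_image)
        loss.backward()
        optimizer.step()
    return z.detach()
```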
“…ELEGANT [27] applies a U-Net structure [22] on the basis of DNA-GAN for high-resolution image generation. Star-GAN [4] and AttGAN [8] realize attribute transfer by introducing an attribute classification loss. Moreover, Star-GAN [4] designs a conditional attribute transfer network to learn attributes in a cyclic process, while AttGAN [8] devises an encoder-decoder architecture to model the relationship between the latent representations and the attributes.…”
Section: Image Attribute Transfer
confidence: 99%
“…Star-GAN [4] and AttGAN [8] realize attribute transfer by introducing an attribute classification loss. Moreover, Star-GAN [4] designs a conditional attribute transfer network to learn attributes in a cyclic process, while AttGAN [8] devises an encoder-decoder architecture to model the relationship between the latent representations and the attributes. Both StarGAN and AttGAN are designed for learning multiple attributes simultaneously.…”
Section: Image Attribute Transfer
confidence: 99%
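The two excerpts above describe AttGAN's core recipe: an encoder-decoder generator whose decoder is conditioned on attributes, trained with an attribute classification loss on the edited output and a reconstruction loss when the original attributes are kept. The sketch below illustrates that training signal under assumptions; `encoder`, `decoder`, and `classifier` are placeholder modules, not the paper's exact networks, and the adversarial term is omitted.

```python
import torch.nn.functional as F

def attribute_edit_losses(encoder, decoder, classifier, image, src_attrs, tgt_attrs):
    latent = encoder(image)                      # latent representation of the input
    edited = decoder(latent, tgt_attrs)          # decode with the target attributes
    reconstructed = decoder(latent, src_attrs)   # decode with the original attributes

    # Attribute classification loss: the edited image should be classified
    # as carrying the requested target attributes.
    cls_loss = F.binary_cross_entropy_with_logits(classifier(edited), tgt_attrs)

    # Reconstruction loss: with the original attributes, the decoder should
    # reproduce the input, so only the requested attributes change.
    rec_loss = F.l1_loss(reconstructed, image)
    return cls_loss, rec_loss
```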
“…Recently, class-conditional extensions of GANs were applied not only to image generation but also to image-to-image translation [6,14,44,61] (including Star-GAN [6]). Their goal is to achieve multi-domain image-to-image translation (i.e., to obtain mappings among multiple domains) using few-parameter models.…”
Section: Related Work
confidence: 99%
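A small sketch of the multi-domain conditioning idea mentioned above: a single generator receives the input image together with a target-domain label, so one set of parameters covers every domain pair. The label is broadcast spatially and concatenated with the image channels, a common conditioning scheme; `backbone` is a placeholder for the actual translation network rather than the cited models' architecture.

```python
import torch

def conditional_translate(backbone, image, domain_label):
    # image: (N, C, H, W); domain_label: (N, num_domains), one-hot target domain.
    n, _, h, w = image.shape
    label_map = domain_label.view(n, -1, 1, 1).expand(-1, -1, h, w)
    return backbone(torch.cat([image, label_map], dim=1))
```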
“…network architecture is the same as that of StarGAN [6]. As a GAN objective, we used WGAN-GP [12] and trained the model using the Adam optimizer [22] with a minibatch of size 16. We set the parameters to the default values of StarGAN, i.e., λ_rec = 10, λ_GP = 10, n_D = 5, α = 0.0001, β_1 = 0.5, and β_2 = 0.999.…”
Section: C.3 Clothing1M
confidence: 99%
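The excerpt above quotes a concrete training configuration. Below is a sketch, using those reported hyperparameters, of how the setup could look: Adam with α = 0.0001, β_1 = 0.5, β_2 = 0.999, λ_rec = 10, a WGAN-GP gradient penalty weighted by λ_GP = 10, and n_D = 5 critic updates per generator update. `generator` and `discriminator` are placeholders; this mirrors the reported settings, not the authors' exact code.

```python
import torch

LAMBDA_REC, LAMBDA_GP, N_D, BATCH_SIZE = 10.0, 10.0, 5, 16

def build_optimizers(generator, discriminator):
    # Adam with the reported learning rate and betas for both networks.
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.999))
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4, betas=(0.5, 0.999))
    return g_opt, d_opt

def gradient_penalty(discriminator, real, fake):
    # WGAN-GP penalty: push the critic's gradient norm toward 1 on random
    # interpolations between real and generated samples.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(discriminator(interp).sum(), interp, create_graph=True)[0]
    return ((grads.view(grads.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()
```

In a typical loop under this setup, the discriminator would take N_D = 5 such updates, each including the penalty term scaled by LAMBDA_GP, before every generator update.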