2020
DOI: 10.48550/arxiv.2005.09635
Preprint

InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs

Abstract: Although Generative Adversarial Networks (GANs) have made significant progress in face synthesis, there is still limited understanding of what GANs have learned in the latent representation that maps a randomly sampled code to a photo-realistic face image. In this work, we propose a framework, called InterFaceGAN, to interpret the disentangled face representation learned by state-of-the-art GAN models and thoroughly analyze the properties of the facial semantics in the latent space. We first find that GANs actual…
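The core recipe behind InterFaceGAN is to treat a binary facial attribute as a linear classification problem in latent space: fit a linear SVM on sampled latent codes labeled by an attribute predictor, and use the unit normal of the separating hyperplane as an editing direction. The sketch below illustrates this idea; the generator `G` and attribute scorer `score_attribute` are hypothetical placeholders, and the snippet is a minimal illustration of the technique, not the authors' released code.

```python
# Minimal sketch of InterFaceGAN-style latent editing. Assumes a pretrained
# generator G (latent code -> image) and an attribute predictor
# score_attribute (image -> scalar); both are hypothetical placeholders.
import numpy as np
from sklearn import svm

def find_direction(latents, labels):
    """Fit a linear SVM separating positive/negative attribute samples;
    the unit normal of its hyperplane is the semantic direction."""
    clf = svm.LinearSVC()            # linear decision boundary in latent space
    clf.fit(latents, labels)
    n = clf.coef_[0]
    return n / np.linalg.norm(n)     # unit-length direction vector

def edit(z, direction, alpha):
    """Move a latent code along the direction: z' = z + alpha * n."""
    return z + alpha * direction

# Usage (shapes assume a 512-dim latent space, as in StyleGAN):
# latents = np.random.randn(10000, 512)
# labels  = np.array([score_attribute(G(z)) > 0 for z in latents], dtype=int)
# n_attr  = find_direction(latents, labels)
# edited  = G(edit(z0, n_attr, alpha=3.0))
```

Moving along the direction with increasing `alpha` strengthens the attribute; negating `alpha` weakens it.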

Cited by 18 publications (47 citation statements)
References 43 publications (83 reference statements)
“…It suggests that our method can achieve fine-grained control over local semantic regions of generated images. Note that our proposed method can perform a variety of local attribute editing tasks, going well beyond previous methods [10,25,26,33]. We then visualize the control units for several attribute manipulations in Figure 4.…”
Section: Results of Local Attribute Manipulation
confidence: 99%
“…We validated our approach on a variety of local attribute editing tasks. We apply InterfaceGAN [25] to find the initial direction vectors to be cropped for the attributes that have been annotated by [20]. For the remaining attributes, for which no usable classifier can be trained, we directly use the difference between the modulation styles of a few annotated positive and negative samples as the direction vectors.…”
Section: Results of Local Attribute Manipulation
confidence: 99%
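The fallback described in the statement above reduces to a difference of means. The sketch below illustrates it under stated assumptions: `styles_pos` and `styles_neg` are arrays of per-sample modulation-style vectors (e.g., StyleGAN style codes after the affine layers) already extracted for a few annotated samples; the extraction step is specific to the citing work and is not reproduced here.

```python
# Hedged sketch: direction vector as the normalized difference between the
# mean modulation styles of annotated positive and negative samples.
import numpy as np

def difference_direction(styles_pos, styles_neg):
    """direction = mean(positive styles) - mean(negative styles), normalized."""
    d = styles_pos.mean(axis=0) - styles_neg.mean(axis=0)
    return d / np.linalg.norm(d)

# e.g., with 5 annotated samples per side and 512-dim style vectors:
# direction = difference_direction(np.stack(pos_styles), np.stack(neg_styles))
```

This requires only a handful of labeled samples rather than a trained classifier, at the cost of a noisier direction estimate.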