2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00520
Disentangled and Controllable Face Image Generation via 3D Imitative-Contrastive Learning

Cited by 240 publications (173 citation statements)
References 28 publications
“…It is shown that the feature maps at different resolutions of the StyleGAN architecture characterize the image styles at different scales and control the generation in a coarse-to-fine fashion. With StyleGAN, image manipulation can be achieved by embedding a real photo into the StyleGAN latent space and editing the embedded latent code for re-generation [Abdal et al. 2019, 2020; Deng et al. 2020; Karras et al. 2020b; Tewari et al. 2020; Zhu et al. 2020]. Semantically-meaningful editing in GAN latent space has also been studied [Bau et al. 2019; Brock et al. 2016; Creswell and Bharath 2018; Richardson et al. 2020; Yeh et al. 2017; Zhu et al. 2016].…”
Section: Image Generation and Editing Using StyleGAN
confidence: 99%
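The editing workflow the statement above describes (embed a real photo into the latent space, shift the embedded code, re-generate) can be sketched abstractly. This is an illustrative sketch only: `encode`, `generate`, and the semantic `direction` are hypothetical stand-ins, not the API of any of the cited works.

```python
import numpy as np

LATENT_DIM = 512  # StyleGAN's w space is 512-D; used here only as a size
rng = np.random.default_rng(0)

def encode(image):
    """Stand-in for GAN inversion: real methods optimize a code w so that
    generate(w) reconstructs `image`. Here we just draw a random code."""
    return rng.standard_normal(LATENT_DIM)

def generate(w):
    """Stand-in for the generator G mapping a latent code to an image."""
    return np.tanh(w)

# 1. Embed a real photo into the latent space.
w = encode("photo.png")

# 2. Edit the embedded code along a (hypothetical) semantic direction,
#    e.g. a "smile" direction found by probing the latent space.
direction = rng.standard_normal(LATENT_DIM)
direction /= np.linalg.norm(direction)
w_edited = w + 2.0 * direction  # 2.0 controls edit strength

# 3. Re-generate to obtain the manipulated image.
edited = generate(w_edited)
```

In the real pipeline, `direction` would come from a learned or discovered semantic axis, and `encode` from an optimization- or encoder-based inversion method.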
“…Face image synthesis has achieved tremendous success recently with Generative Adversarial Networks (GANs) [Goodfellow et al., 2014]. State-of-the-art GAN models [Karras et al., 2019; Karras et al., 2020] can generate high-fidelity face images from the learned latent space.…”
Section: Ours
confidence: 99%
“…A successful editing should not only output high-quality results with the accurate target attribute, but also preserve all other image content characterized by the complementary attributes. Face attribute editing has attracted much attention in recent years, and numerous algorithms have been proposed [Shen and Liu, 2017; Choi et al., 2018; Bahng et al., 2020; Awiszus et al., 2019; Gu et al., 2019]. Notwithstanding the…”
Section: Introduction
confidence: 99%
“…Bepler et al [2] and Detlefsen and Hauberg [3] performed disentanglement on image data by separating the latent variables into two groups assigned to appearance and perspective. Deng et al [10] also developed a disentanglement method for facial images by assigning parameters of a three-dimensional morphable face model [41] to the latent variables.…”
Section: Disentangled Representation Learning
confidence: 99%
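The two approaches above share a common mechanism: the latent vector is partitioned into groups, each assigned one factor of variation (appearance vs. perspective, or 3DMM parameters such as identity, expression, pose, and lighting). A minimal sketch of such a partition follows; the group names and sizes are hypothetical, not the dimensions of any cited model.

```python
import numpy as np

# Hypothetical factor groups; actual dimensions depend on the model.
GROUPS = {"identity": 80, "expression": 64, "pose": 3, "lighting": 27}
LATENT_DIM = sum(GROUPS.values())

def split_latent(z):
    """Slice a flat latent vector into named factor groups."""
    parts, start = {}, 0
    for name, size in GROUPS.items():
        parts[name] = z[start:start + size]
        start += size
    return parts

z = np.random.default_rng(1).standard_normal(LATENT_DIM)
parts = split_latent(z)

# Disentangled editing: modify only one factor (here, zero out the pose)
# while every other group is left untouched.
parts["pose"] = np.zeros(GROUPS["pose"])
z_edited = np.concatenate([parts[name] for name in GROUPS])
```

The point of the partition is exactly this last step: an edit to one group leaves the slices assigned to the other factors bitwise unchanged.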
“…In many DRL methods used in previous studies, attempts were made to discover factors of variation without any prior information or supervision. Although these unsupervised DRL methods obtained remarkable results on toy datasets such as dSprites [22] and 3D Shapes [23], there is no guarantee that each latent variable corresponds to a single semantically meaningful factor of variation without any inductive bias [10], [24], [25]. Hence, recent DRL studies have focused on introducing into the model an explicit prior that imposes constraints or regularizations based on the underlying structure of complicated real-world images [26], [27], such as translation and rotation [2], [28], hierarchical features [8], [9], [29], and domain-specific knowledge [10].…”
Section: Introduction
confidence: 99%