2020
DOI: 10.1145/3414685.3417826
Face identity disentanglement via latent space mapping

Abstract: Learning disentangled representations of data is a fundamental problem in artificial intelligence. Specifically, disentangled latent representations allow generative models to control and compose the disentangled factors in the synthesis process. Current methods, however, require extensive supervision and training, or instead, noticeably compromise quality. In this paper, we present a method that learns how to represent data in a disentangled way, with minimal supervision, manifested solely using ava…
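The core idea the abstract describes — separate codes for disentangled factors that are then composed in a generator's latent space — can be sketched as follows. This is a minimal, hedged illustration, not the authors' implementation: the dimensions, the two-layer mapping network, and the encoder outputs (here random stand-ins) are all assumptions; in the actual method the mapping is trained so a frozen pretrained generator reproduces identity from one image and attributes from another.

```python
import numpy as np

rng = np.random.default_rng(0)

def mapping_mlp(x, w1, b1, w2, b2):
    # Two-layer MLP with ReLU; stands in for the learned mapping network
    # that fuses the disentangled codes into one generator latent.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Hypothetical dimensions: a 512-d identity code (e.g. from a pretrained
# face-recognition net) and a 512-d attribute code, mapped to a 512-d latent.
D_ID, D_ATTR, D_W = 512, 512, 512

# Randomly initialized weights for illustration only.
w1 = rng.standard_normal((D_ID + D_ATTR, 1024)) * 0.02
b1 = np.zeros(1024)
w2 = rng.standard_normal((1024, D_W)) * 0.02
b2 = np.zeros(D_W)

z_id = rng.standard_normal(D_ID)      # identity code of source image A
z_attr = rng.standard_normal(D_ATTR)  # attribute code (pose, lighting, ...) of image B

# Compose the two factors: the resulting latent would be fed to the
# frozen pretrained generator to synthesize A's identity with B's attributes.
w_latent = mapping_mlp(np.concatenate([z_id, z_attr]), w1, b1, w2, b2)
print(w_latent.shape)  # (512,)
```

Because only the lightweight mapping network is trained while the generator stays fixed, swapping either input code changes only the corresponding factor in the synthesized image, which is what makes the representation disentangled.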

Cited by 109 publications (75 citation statements)
References 33 publications
“…However, it cannot accurately integrate identity and attribute information because of its simple encoder structure and the constraint of the W latent space. Therefore, although it can generate high-quality images (Figure 3, column 6), it is not as good as the proposed method at fusing semantic information, which is reflected in the comparison with FSGAN (Nirkin et al, 2019) and FaceShifter (Li et al, 2019; Nitzan et al, 2020) on the CelebAMask-HQ (Lee et al, 2020) test dataset.…”
Section: Qualitative Comparison With Previous Methods
confidence: 97%
“…We compare the proposed method with FSGAN (Nirkin et al, 2019) and FaceShifter (Li et al, 2019; Nitzan et al, 2020) on the CelebAMask-HQ (Lee et al, 2020) test dataset. Figure 3 shows, as expected given that the proposed method is based on a pretrained StyleGAN (Karras et al, 2019) with high-quality face-generation capability, that all the generation results (Figure 3, column 6) are stable and clear, with no errors such as artifacts or abnormal illumination.…”
Section: Qualitative Comparison With Previous Methods
confidence: 99%