2022
DOI: 10.1007/978-3-031-19784-0_4
Designing One Unified Framework for High-Fidelity Face Reenactment and Swapping

Cited by 11 publications (6 citation statements)
References 41 publications
“…As a result, although slightly affected by Salt and Pepper (95.99%) and Gaussian Blur (97.11%), we successfully maintained state-of-the-art accuracies above 95% for all manipulations. Furthermore, an obvious accuracy decline on UniFace [43] can be observed in Table 2 and Table 4 at 256 × 256 resolution on both CelebA-HQ [16] and LFW. Besides the challenges caused by cross manipulation, this is mainly due to UniFace accepting only half-precision floating-point inputs.…”
Section: Cross-dataset Experiments
confidence: 95%
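The half-precision constraint cited above has a concrete cost: casting float32 images down to float16 discards low-order mantissa bits before the model ever sees them. Below is a minimal PyTorch sketch of that cast and its per-pixel error; the fp16-only model stub is hypothetical, as UniFace's actual interface is not shown in the source.

```python
import torch

def run_fp16_only_model(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    # Hypothetical stub: a model that only accepts float16 inputs forces a
    # lossy float32 -> float16 cast before inference.
    x = image.to(torch.float16)
    with torch.no_grad():
        return model(x)

# Quantify the cast error alone on a random 256x256 RGB image in [0, 1].
img32 = torch.rand(1, 3, 256, 256)
img16 = img32.to(torch.float16)
err = (img32 - img16.to(torch.float32)).abs().max()
print(f"max per-pixel cast error: {err.item():.2e}")  # on the order of 1e-4
```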
“…P_common consists of the following manipulations for both training and testing: Dropout, Resize, JPEG Compression, Gaussian Noise, Salt and Pepper, Gaussian Blur, and Median Blur. P_swap includes SimSwap [4] alone in the training phase, and adopts SimSwap, InfoSwap [8], and UniFace [43], all with state-of-the-art performance, for cross-manipulation evaluation in the testing phase. Since there are many common manipulations to consider and they only introduce noise without modifying the facial content, we first pre-trained the model on P_common for a sufficient number of iterations.…”
Section: Implementation Details
confidence: 99%
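As an illustration of such a common-manipulation pool, the sketch below implements the seven listed operations with NumPy and OpenCV and samples one per image. All parameter choices (dropout rate, JPEG quality, noise strength, kernel sizes) are illustrative assumptions, not the cited paper's settings.

```python
import numpy as np
import cv2  # OpenCV; a common choice for these manipulations, not necessarily the paper's

rng = np.random.default_rng(0)

def dropout(img):
    # Zero out a random 5% of pixels (illustrative rate).
    mask = rng.random(img.shape[:2]) < 0.05
    out = img.copy()
    out[mask] = 0
    return out

def resize(img):
    # Down- then up-sample to destroy fine detail at the original size.
    h, w = img.shape[:2]
    small = cv2.resize(img, (w // 2, h // 2))
    return cv2.resize(small, (w, h))

def jpeg_compress(img):
    # Round-trip through JPEG at an illustrative quality of 50.
    ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 50])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)

def gaussian_noise(img):
    noise = rng.normal(0.0, 10.0, img.shape)
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

def salt_and_pepper(img):
    out = img.copy()
    m = rng.random(img.shape[:2])
    out[m < 0.02] = 0      # pepper
    out[m > 0.98] = 255    # salt
    return out

def gaussian_blur(img):
    return cv2.GaussianBlur(img, (5, 5), 0)

def median_blur(img):
    return cv2.medianBlur(img, 5)

P_COMMON = [dropout, resize, jpeg_compress, gaussian_noise,
            salt_and_pepper, gaussian_blur, median_blur]

def apply_common(img):
    # Apply one randomly chosen manipulation, as in pre-training augmentation.
    return P_COMMON[rng.integers(len(P_COMMON))](img)

# Example: perturb a random uint8 face crop.
face = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
perturbed = apply_common(face)
```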
“…Although some general representation learning methods have achieved good performance, methods designed for face images still outperform those trained on general datasets [32]. Supervised representation learning methods [21,22,26,30,31] require large amounts of manually labeled data and, given their large numbers of model parameters, suffer from overfitting. Recently, some works [18,32] have used web text-image pairs as a pre-training dataset, achieving impressive representation learning results.…”
Section: Related Work 2.1 Facial Representation Learning
confidence: 99%
“…In one approach to face swapping, a reconstruction loss between the swapped and target images is employed when the source and target images share the same identity [5,32,20,35,34,33]. However, applying the reconstruction loss in certain scenarios necessitates the use of identity-labeled image datasets [24,4] or video datasets [23,6].…”
Section: Introduction
confidence: 99%
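The same-identity reconstruction constraint described above can be written compactly: when source and target share an identity, the swapped result has a pixel-level ground truth, namely the target itself. A minimal PyTorch sketch, with `generator` standing in for an arbitrary swap network (not the paper's architecture):

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(generator, source, target, same_identity: bool):
    # `generator` is a placeholder for any swap network G(source, target).
    swapped = generator(source, target)
    if same_identity:
        # With a shared identity, the target itself is pixel-level ground
        # truth; L1 is one common choice (L2 or perceptual losses also appear).
        return F.l1_loss(swapped, target)
    # Different identities have no pixel-level ground truth, so other
    # objectives (identity / attribute losses) must supervise instead.
    return swapped.new_zeros(())
```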