2022
DOI: 10.1007/978-3-031-19778-9_4
Face2Face^ρ: Real-Time High-Resolution One-Shot Face Reenactment

Cited by 11 publications (6 citation statements)
References 47 publications
“…To validate the performance of the proposed method, we compare our proposed face interactive coding scheme with the latest hybrid video coding standard VVC (VTM10.0) [3] and five generative compression schemes, including FOMM [16], FOMM2.0 [18], CFTE [24], Face vid2vid [5] and Face2FaceRHO [81]. We adopt three learning-based visual quality measures, including Deep Image Structure and Texture Similarity (DISTS) [94], Learned Perceptual Image Patch Similarity (LPIPS) [95] and Fréchet inception distance (FID) [96] to evaluate the reconstruction quality of generation results [97].…”
Section: Comparison Methods
confidence: 99%
“…Yao et al [77] introduced graph convolutional networks (GCNs) to learn the optical flow from reconstructed 3D meshes for one-shot face synthesis. In addition, Doukas et al [78], Zhang et al [79], Ren et al [80] and Yang et al [81] all leveraged facial 3DMMs for controllable portrait generation through different strategies, such as multi-modality, multi-task learning or hierarchical motion prediction. Moreover, promising results in 3DMM-based face generative compression have also been shown in [82], [83].…”
Section: 3D Face Modeling With Deep Learning
confidence: 99%
“…However, such methods perform poorly on the challenging task of cross-subject reenactment, since facial landmarks preserve the facial shape and consequently the identity geometry of the target face. In order to mitigate the identity leakage from the target face to the source face, several methods [28,17,45,9,15,37,26,59] leverage the disentangled properties of 3D Morphable Models (3DMM) [8,16]. Xu et al [57] propose a unified architecture that learns to perform both face reenactment and swapping.…”
Section: Related Work
confidence: 99%