2021
DOI: 10.1109/tbiom.2021.3049576

Head2Head++: Deep Facial Attributes Re-Targeting

Abstract: This document is made available in accordance with publisher policies and may differ from the published version or from the version of record. If you wish to cite this item you are advised to consult the publisher's version. Please see the URL above for details on accessing the published version.

Cited by 33 publications (14 citation statements)
References 46 publications (85 reference statements)
“…Traditional techniques [42], [43] perform 3D face reconstruction on the reference video and render the target subject under the source expressions on top of the original target footage. Learning-based methods, like DVP [23] and Head2Head++ [13], use conditional GANs to render the target subject under the given conditions (expressions, pose, eye-gaze). Nevertheless, these methods offer no semantic control over the generated video, as they directly copy the expressions from a source actor.…”
Section: Related Work (mentioning)
confidence: 99%
“…We use FAN [4] to obtain 68 facial landmarks for each frame. Afterwards, similarly to [13], we estimate the eye pupil coordinates based on the inverse intensities of the pixels within the eye area and create eye images E ∈ R^{256×256×3} that provide the face renderer with information about the eye-gaze. However, in contrast to [13], we only draw two red disks around the eye pupils, and not the edges of the eye outline.…”
Section: 3D Face Analysis (mentioning)
confidence: 99%
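The pupil-estimation step quoted above is concrete enough to sketch. Below is a minimal Python sketch (NumPy + OpenCV) of one plausible reading: the pupil center is taken as the centroid of inverse pixel intensities inside the eye-outline polygon, and the two pupils are drawn as red disks on a blank 256×256×3 conditioning image. The landmark indices, the weighting scheme, the helper names, and the frame-to-canvas rescaling are assumptions for illustration, not the cited authors' exact implementation.

```python
# A minimal sketch of the eye-gaze conditioning described above, assuming
# 68-point landmarks (e.g. from FAN) are given as a (68, 2) float array in
# frame coordinates. The eye-outline indices follow the standard 68-point
# convention; the weighting and frame-to-canvas mapping are assumptions.
import numpy as np
import cv2

LEFT_EYE = np.arange(36, 42)   # left-eye outline in the 68-point convention
RIGHT_EYE = np.arange(42, 48)  # right-eye outline

def estimate_pupil(gray, eye_pts):
    """Pupil center as the centroid of inverse intensities (255 - I)
    inside the eye-outline polygon: dark pixels get large weights."""
    mask = np.zeros_like(gray, dtype=np.uint8)
    cv2.fillPoly(mask, [eye_pts.astype(np.int32)], 255)
    ys, xs = np.nonzero(mask)
    weights = 255.0 - gray[ys, xs].astype(np.float64)
    if weights.sum() == 0:  # degenerate region: fall back to the polygon mean
        return eye_pts.mean(axis=0)
    return np.array([(xs * weights).sum(), (ys * weights).sum()]) / weights.sum()

def eye_conditioning_image(frame_bgr, landmarks, radius=6):
    """Build a 256x256x3 eye image: black canvas with two red pupil disks."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    canvas = np.zeros((256, 256, 3), dtype=np.uint8)
    for idx in (LEFT_EYE, RIGHT_EYE):
        px, py = estimate_pupil(gray, landmarks[idx])
        center = (int(px * 256.0 / w), int(py * 256.0 / h))  # rescale to canvas
        cv2.circle(canvas, center, radius, (0, 0, 255), -1)  # filled red disk (BGR)
    return canvas
```

In use, each video frame and its detected landmarks would pass through eye_conditioning_image, and the result would be stacked with the renderer's other conditioning inputs; the cropping and rescaling in the cited paper may differ from this whole-frame mapping.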
“…For instance, the Face2Face [40] method performs face reenactment by recovering facial expressions from a driving video and overwriting them onto the source frames. Some recent learning-based approaches [23,25,13] have sought to solve the problem of full head reenactment, which aims to transfer not only the expression, but also the pose, from a driving person to the source identity. The shortcoming of such methods is their dependence on long video footage of the source, as they train person-specific models.…”
Section: Introduction (mentioning)
confidence: 99%