2015
DOI: 10.1145/2816795.2818056
Real-time expression transfer for facial reenactment

Abstract: We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photorealistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video. To achieve this, we accurately capture the facial performances of t…

Cited by 362 publications (278 citation statements)
References 59 publications
“…Our sparse regression takes 10 ms for the medium layer and 2 s for the fine-scale detail layer. We believe that a drastic reduction of the computation time is possible by harnessing the data-parallel processing power of modern GPUs, as recently demonstrated for nonlinear optimization [Thies et al. 2015]. …”
Section: Results
confidence: 96%
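The excerpt above argues that the nonlinear optimization could be sped up on GPUs because it is data-parallel. As a minimal, hypothetical illustration of why such solvers parallelize well (a toy one-dimensional model, not the cited face-fitting energy), here is a Gauss-Newton loop in which every residual and Jacobian row is evaluated independently:

```python
import numpy as np

# Toy Gauss-Newton solver for y = a * exp(b * x). Illustrative only:
# the cited works minimize much larger face-model energies, but the
# structure is the same -- every residual and every Jacobian row below
# is computed independently, which is what makes the solve GPU-friendly.
def gauss_newton(x, y, a, b, iters=25):
    for _ in range(iters):
        e = np.exp(b * x)
        r = a * e - y                       # all residuals at once
        J = np.stack([e, a * x * e], 1)     # one Jacobian row per point
        step = np.linalg.solve(J.T @ J, J.T @ r)
        a, b = a - step[0], b - step[1]
    return a, b

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * x)                   # noiseless synthetic data
a, b = gauss_newton(x, y, 1.5, 1.2)
```

On a GPU, the per-point residual and Jacobian evaluations map directly onto parallel threads; only the small normal-equations solve is serial.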
“…Shi et al. [2014] use a very similar tracking approach, but do not extract a high-fidelity parametrized 3D rig that contains a generative wrinkle formation model capturing the person-specific idiosyncrasies. Recently, Thies et al. [2015] presented an approach for real-time facial reenactment, but the method cannot handle fine-scale surface detail and requires RGB-D camera input. Cao et al. [2014a] use a learned regression model to fit, in real time, a generic identity and expression model to RGB face video.…”
Section: Related Work
confidence: 99%
“…A depth-based method that uses a parametric 3D facial model to deal robustly with noisy depth input was introduced in [37]. Recently, a state-of-the-art depth-based tracking method with a parametric 3D facial geometry and lighting model was proposed for real-time facial expression transfer and reenactment in [33]. Because of the limited depth-sensor resolution, RGB color input is used to supply extra information that refines the tracking.…”
Section: Expression Clustering
confidence: 99%
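Several excerpts here refer to fitting a parametric 3D facial model to observations. As a hedged sketch of the core idea (random toy data, not a real morphable model; real trackers also solve for pose, identity, and lighting), recovering the expression weights of a linear blendshape model from observed vertex positions reduces to a linear least-squares problem:

```python
import numpy as np

# Sketch: recover expression weights of a linear blendshape face model
# from observed vertex positions. The basis is random toy data, not a
# real morphable model -- it only illustrates the least-squares core.
rng = np.random.default_rng(0)
n_verts, n_blend = 30, 4
neutral = rng.normal(size=(n_verts, 3))
blendshapes = rng.normal(size=(n_blend, n_verts, 3))    # per-shape deltas

true_w = np.array([0.5, 0.0, 0.8, 0.2])
observed = neutral + np.tensordot(true_w, blendshapes, axes=1)

# observed - neutral = B w  is an overdetermined linear system
B = blendshapes.reshape(n_blend, -1).T                  # (3*n_verts, n_blend)
w, *_ = np.linalg.lstsq(B, (observed - neutral).reshape(-1), rcond=None)
```

With noiseless synthetic data the weights are recovered exactly; in a real tracker this solve is one inner step inside a larger nonlinear optimization over pose and lighting as well.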
“…Our method instead transfers expressions onto a target character from images or videos, where previous approaches are either not applicable or rely on depth information. We need to track the facial meshes of the actor and the target as well as model their appearances, which makes real-time performance difficult to achieve.…”
Section: Related Work
confidence: 99%