Proceedings of the 3rd Symposium on Applied Perception in Graphics and Visualization 2006
DOI: 10.1145/1140491.1140508
Semantic 3D motion retargeting for facial animation

Abstract: We present a system for realistic facial animation that decomposes facial Motion Capture data into semantically meaningful motion channels based on the Facial Action Coding System. A captured performance is retargeted onto a morphable 3D face model based on a semantically corresponding set of 3D scans. The resulting facial animation reveals a high level of realism by combining the high spatial resolution of a 3D scanner with the high temporal accuracy of motion capture data that accounts for subtle facial move…
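
A minimal sketch of the kind of FACS-based decomposition the abstract describes, under assumptions that go beyond the paper: per-frame action-unit (AU) weights are recovered from marker displacements by non-negative least squares against a set of per-AU basis displacements. The array names and the solver choice are illustrative, not taken from the paper.

```python
# Hedged sketch (not the authors' implementation): recover per-frame AU weight
# channels from motion-capture markers via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

def decompose_into_au_channels(markers, neutral, au_basis):
    """markers:  (frames, 3*n_markers) captured marker positions
       neutral:  (3*n_markers,) marker positions of the neutral pose
       au_basis: (n_aus, 3*n_markers) marker displacement per unit AU activation
       returns:  (frames, n_aus) AU weight time courses"""
    displacements = markers - neutral              # per-frame deviation from neutral
    A = au_basis.T                                 # columns = AU displacement patterns
    weights = np.array([nnls(A, d)[0] for d in displacements])
    return weights
```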

Cited by 47 publications (47 citation statements)
References 26 publications
“…Each unit is then plotted as a time course so that the spatiotemporal properties of local movements can be represented. This technique has been applied to motion-capture data to create highly controlled and meaningful facial animations (e.g., Curio et al., 2006; Dobs et al., 2014). The advantage here is that facial motion is annotated accurately and precisely with reference to underlying muscle activations.…”
Section: Comparison Of Methodology With Other Approaches
Classification: mentioning (confidence: 99%)
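
The time-course representation mentioned in the statement above can be visualized with a short plotting routine. This sketch assumes the `weights` array produced by the decomposition sketch earlier and a hypothetical capture rate; neither detail comes from the paper.

```python
# Hedged sketch: plot recovered AU weight channels as time courses,
# one curve per action unit (AU names are placeholders).
import numpy as np
import matplotlib.pyplot as plt

def plot_au_time_courses(weights, fps=120.0, au_names=None):
    """weights: (frames, n_aus) array of AU activations over time."""
    t = np.arange(weights.shape[0]) / fps
    for i in range(weights.shape[1]):
        label = au_names[i] if au_names else f"AU channel {i}"
        plt.plot(t, weights[:, i], label=label)
    plt.xlabel("time (s)")
    plt.ylabel("AU activation")
    plt.legend()
    plt.show()
```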
“…The advantage here is that facial motion is annotated accurately and precisely with reference to underlying muscle activations. It is also easy to retarget motion onto any face model that uses the same semantic structure (Curio et al., 2006). Yet, these FACS-derived animations typically present only nonrigid motion, that is, facial expressions without changes in viewpoint.…”
Section: Comparison Of Methodology With Other Approaches
Classification: mentioning (confidence: 99%)
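
Because source and target share the same semantic AU structure, retargeting can be sketched as driving the target model's per-AU blendshapes with the recovered weights. This is an illustrative linear-blendshape formulation under assumed array layouts, not the paper's exact morphable-model pipeline.

```python
# Hedged sketch: apply source AU weights to a target model that provides one
# blendshape (e.g. a 3D scan) per action unit. Array names are assumptions.
import numpy as np

def retarget(weights, target_neutral, target_au_shapes):
    """weights:          (frames, n_aus) AU activations from the source performance
       target_neutral:   (n_vertices, 3) neutral mesh of the target face model
       target_au_shapes: (n_aus, n_vertices, 3) per-AU vertex offsets of the target
       returns:          (frames, n_vertices, 3) animated target meshes"""
    offsets = np.einsum("fa,avc->fvc", weights, target_au_shapes)
    return target_neutral[None] + offsets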
“…A quantitative evaluation is based on the computation of a numerical difference between the source and the target expressions. Previous results, whether qualitative [23] or quantitative [26], tell us that a perfect match between the source and the target is never achieved. The animation pipeline presented in this work is evaluated, qualitatively, by measuring the recognizability of the six basic facial emotions defined by Ekman [1].…”
Section: Online Puppetry and Performance Capture
Classification: mentioning (confidence: 94%)
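
The numerical source-to-target difference referred to in this statement could take many forms. The sketch below shows one plausible choice, a per-frame root-mean-square difference over a shared expression representation (AU weights or corresponding vertex positions); it is not the specific metric used in [26].

```python
# Hedged sketch of a quantitative source/target comparison:
# per-frame RMSE between expression representations in a shared space.
import numpy as np

def expression_rmse(source, target):
    """source, target: (frames, dims) expression representations in a shared space."""
    return np.sqrt(np.mean((source - target) ** 2, axis=1))
```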
“…In performance capture, the puppeteer's face usually does not match the puppet's face: they often have different topology and morphology. The mapping of the puppeteer's facial motion onto the puppet face is called retargeting [7, 21-23].…”
Section: Online Puppetry and Performance Capture
Classification: mentioning (confidence: 99%)
“…For instance, some facial animation systems are based on audio capture [5] and others are based on motion capture [6, 7]. Regardless of the approach used, it is essential that the capture solution ensures synchronization of lip movements with the audio.…”
Section: Introduction
Classification: mentioning (confidence: 99%)