Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation 2013
DOI: 10.1145/2485895.2485915

High fidelity facial animation capture and retargeting with contours

Figure 1: Our performance-capture approach excels at capturing and retargeting mouth and eyelid motion accurately.

Abstract: Human beings are naturally sensitive to subtle cues in facial expressions, especially in areas of the eyes and mouth. Current facial motion capture methods fail to accurately reproduce motions in those areas due to multiple limitations. In this paper, we present a new performance capture method that focuses on the perceptually important contour features on the face. Additionally, the outpu…

Cited by 42 publications (42 citation statements); references 48 publications (33 reference statements). Citing publications span 2014–2023. Citation types: 0 supporting, 42 mentioning, 0 contrasting.

Citation statements (ordered by relevance):
“…With another purpose, but related to identifying facial expressions in synthetic faces, is the work of (Fernandes, Alves, Miranda, Queirós, & Orvalho, 2011) that presents a system to help children with autism to learn about facial expressions using CG characters. Some recent work focused in facial motion capture and real-time retargeting (Bhat, Goldenthal, Ye, Mallet, & Koperwas, 2013;H. Li, Yu, Ye, & Bregler, 2013) also discuss the importance of preserving microexpressions, although they do not guarantee it yet: the framerate of and inherent noise in the ordinary imaging devices used in this kind of application present technical challenges to robustly capturing microexpressions.…”
Section: Related Work (mentioning)
Confidence: 99%
“…We believe that such a database can be used to determine a correct regression model without the need for calibration before each use. Similar to head mounted cameras used in production for motion capture [Bhat et al 2013], we attach an RGB-D camera to record the mouth region of the subject, but still intend to design a more ergonomic solution with either smaller and closer range cameras, or even alternative sensors. To advance the capture capabilities of our system, we hope to increase the number of strain gauges and combine our system with other sensors such as HMD integrated eye tracking systems [SMI 2014].…”
Section: Results (mentioning)
Confidence: 99%
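The citation above describes learning a regression from wearable sensor readings (strain gauges) to facial animation parameters, so that per-session calibration becomes unnecessary once a large enough database exists. As a minimal sketch of that idea only — the gauge count, blendshape count, and the choice of a linear least-squares model are illustrative assumptions, not details of the cited system — the mapping might be fit like this:

```python
import numpy as np

# Hypothetical training data: each row pairs one frame of strain-gauge
# readings with the blendshape weights recovered by a reference capture rig.
rng = np.random.default_rng(0)
gauge_readings = rng.normal(size=(500, 8))    # 8 strain gauges (assumed)
blend_weights = rng.uniform(size=(500, 30))   # 30 blendshapes (assumed)

# Fit a linear map W with bias so that [readings, 1] @ W ~= weights.
X = np.hstack([gauge_readings, np.ones((len(gauge_readings), 1))])
W, *_ = np.linalg.lstsq(X, blend_weights, rcond=None)

def predict(readings: np.ndarray) -> np.ndarray:
    """Map one frame of sensor readings to blendshape weights."""
    x = np.append(readings, 1.0)
    return np.clip(x @ W, 0.0, 1.0)  # blendshape weights typically live in [0, 1]
```

If the training database covers enough subjects and expressions, `predict` can be applied to a new user's readings directly, which is the calibration-free use the citation anticipates.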
“…In the entertainment industry, facial performance capture is an established approach to improve the efficiency of animation production by reducing manual key-framing tasks and minimizing complex physical simulations of facial biomechanics [Terzopoulos and Waters 1990;Sifakis et al 2005]. To achieve the highest possible facial tracking fidelity, marker-based solutions, hand-assisted tracking, and multi-camera settings are still commonly employed [Bhat et al 2013;Bickel et al 2007;Pighin and Lewis 2006], while often requiring intensive computation.…”
Section: Previous Work (mentioning)
Confidence: 99%
“…Kholgade et al [2011] developed a layered composition model of expressions for retargeting facial performances from the source model to a target character with dissimilar facial structure. More recently, blendshape mapping has been combined with facial tracking to animate a target character model [Bhat et al 2013;Bouaziz et al 2013;Li et al 2013]. Blendshape mapping, however, often requires skilled artists to manually create the facial rig to ensure the quality of output animation, but this is a slow, labor-intensive and costly process.…”
Section: Facial Expression Retargeting (mentioning)
Confidence: 99%
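The blendshape mapping mentioned above is, at its core, a linear model: the target character's mesh is its neutral pose plus a weighted sum of per-shape vertex offsets, driven by weights solved on the source performer. A minimal sketch (array shapes are illustrative assumptions, not any specific production rig):

```python
import numpy as np

def apply_blendshapes(neutral: np.ndarray,
                      shapes: np.ndarray,
                      weights: np.ndarray) -> np.ndarray:
    """Evaluate a linear blendshape rig.

    neutral: (n_vertices, 3) rest pose of the target character.
    shapes:  (n_shapes, n_vertices, 3) sculpted expression targets.
    weights: (n_shapes,) tracked expression weights, usually in [0, 1].
    """
    deltas = shapes - neutral              # per-shape vertex offsets
    return neutral + np.tensordot(weights, deltas, axes=1)

# Example: a one-triangle "face" with two shapes, each driven at half strength.
neutral = np.zeros((3, 3))
shapes = np.stack([neutral + [0.1, 0.0, 0.0], neutral + [0.0, 0.1, 0.0]])
print(apply_blendshapes(neutral, shapes, np.array([0.5, 0.5])))
```

Retargeting in this scheme assumes the source and target rigs expose semantically matching shapes; the manual sculpting needed to guarantee that correspondence is exactly the labor-intensive step the citation criticizes.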
“…In addition, the facial features around the eyes and mouth are especially important for high quality facial animation [Bhat et al 2013]. For example, commercial applications such as Live Driver [ImageMetrics ] have obtained impressive results for facial puppetry by tracking only those facial features around those regions.…”
Section: Problem Formulation (mentioning)
Confidence: 99%