2001
DOI: 10.1121/1.1391240

An inverse dynamics approach to face animation

Abstract: Muscle-based models of the human face produce high-quality animation but rely on recorded muscle activity signals or synthetic muscle signals that are often derived by trial and error. In this paper we present a dynamic inversion of a muscle-based model (Lucero and Munhall, 1999) that permits the animation to be created from kinematic recordings of facial movements. Using a nonlinear optimizer (Powell's algorithm), the inversion produces a muscle activity set for 7 muscles in the lower face that minimizes the roo…
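
The abstract describes fitting muscle activations by minimizing a root-mean-square kinematic error with Powell's method. The sketch below illustrates that inversion loop with SciPy; the linear forward model, the marker count, and all constants are illustrative assumptions, not the paper's muscle-based face model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N_MUSCLES, N_MARKERS = 7, 18              # 7 lower-face muscles; marker count is made up

# Stand-in forward model: marker positions respond linearly to muscle activity.
W = rng.normal(size=(N_MARKERS, N_MUSCLES))

def simulate(activations):
    """Hypothetical forward model; the real one would run the face dynamics."""
    return W @ activations

def rms_error(activations, recorded):
    """RMS error between simulated and recorded marker positions."""
    return np.sqrt(np.mean((simulate(activations) - recorded) ** 2))

# Synthetic "recording" generated from known activations, so recovery can be checked.
true_act = rng.uniform(0.0, 1.0, size=N_MUSCLES)
recorded = simulate(true_act)

result = minimize(rms_error, x0=np.full(N_MUSCLES, 0.5),
                  args=(recorded,), method="Powell")
print("true:     ", np.round(true_act, 3))
print("recovered:", np.round(result.x, 3))
```

Powell's method is a natural fit here because it is derivative-free: the forward simulation need not provide gradients.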

Cited by 10 publications (10 citation statements) | References 25 publications
“…[TW93] proposed an animation system in which muscle actuation parameters are computed from video-based tracking of facial features. As with mass-spring systems, studies have proposed methods to recover muscle actuation parameters for those models, either through inverse dynamics approaches [EBDP96, PM01] or with a quasi-static formulation [SNF05].…”
Section: Related Work
confidence: 99%
“…We show results for a dataset when grouping the markers into 15 clusters, using the above algorithm on CID sentences 1-5 and 7-10. The 15 clusters were chosen for this example because it amounted to a more than 50% reduction in the dimensionality of the marker data and it is also consistent with the number of dimensions in muscle models used for facial animation (e.g., Lucero and Munhall, 1999; Pitermann and Munhall, 2001). Our intention here is just to illustrate the results that the algorithm can produce; the appropriate number of clusters is a subject for further study.…”
Section: A. Clusters
confidence: 99%
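
A minimal sketch of the marker-clustering idea quoted above, assuming trajectories stored as a markers-by-frames array and using k-means as the grouping method; the citing paper's own algorithm and data shapes may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_markers, n_frames = 38, 500             # toy recording: 38 markers over time
trajectories = rng.normal(size=(n_markers, n_frames))

# Cluster markers by trajectory similarity; 15 clusters more than halves
# the dimensionality, as in the quoted example.
labels = KMeans(n_clusters=15, n_init=10, random_state=0).fit_predict(trajectories)

# Keep one representative signal per cluster: the mean of its members.
reduced = np.stack([trajectories[labels == k].mean(axis=0) for k in range(15)])
print(reduced.shape)                       # (15, 500)
```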
“…It has been claimed that experimental control of visual stimuli has been lacking in audiovisual speech research and a system that permitted the direct manipulation of facial movement parameters would be a significant advance (Munhall and Vatikiotis-Bateson, 1998). Answering this claim, several efforts have been undertaken to develop data-driven animation systems (e.g., Badin et al., 2002; Beskow, 2004; Bevacqua and Pelachaud, 2004; Kuratate et al., 1998; Lucero and Munhall, 1999; Ouni et al., 2005; Pitermann and Munhall, 2001; Zhang et al., 2004). In a previous work (Lucero and Munhall, 1999), we described a three-dimensional (3-D) model based on the physiological structure of the human face. The model followed the muscle-based approach of Terzopoulos and Waters (1990), and consisted of a multilayered deformable mesh that was deformed by the action of forces, generated by modeled muscles of facial expression.…”
Section: Introduction
confidence: 99%
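
The quoted passage describes a mesh deformed by forces generated by modeled muscles. The toy sketch below illustrates that idea with a one-dimensional mass-spring chain driven by an external "muscle" force; the layout and all constants are assumptions for illustration, not parameters of the Lucero and Munhall model.

```python
import numpy as np

n = 10                                    # chain of 10 point masses
pos = np.linspace(0.0, 1.0, n)            # rest positions along a line
vel = np.zeros(n)
rest = np.diff(pos).copy()                # rest lengths of the 9 springs
k_spring, damping, mass, dt = 50.0, 0.8, 0.01, 1e-3

def step(pos, vel, muscle_force):
    """One semi-implicit Euler step under spring + muscle forces."""
    force = np.zeros(n)
    stretch = np.diff(pos) - rest         # spring elongations
    force[:-1] += k_spring * stretch      # each spring pulls both endpoints
    force[1:]  -= k_spring * stretch
    force += muscle_force                 # modeled muscle input
    vel = damping * (vel + dt * force / mass)
    vel[0] = 0.0                          # node 0 anchored (e.g., to the skull)
    return pos + dt * vel, vel

muscle = np.zeros(n)
muscle[-1] = -0.2                         # contract the free end toward the anchor
for _ in range(1000):
    pos, vel = step(pos, vel, muscle)
print(np.round(pos, 3))                   # deformed node positions
```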
“…Another feat has been the creation of a toolbox for calibrated EMG-informed neuro-musculoskeletal modelling (CEINMS) [32]. Reports on inverse modelling of the perioral region are scarce [33–35], and only a few involve EMG measurements [36].…”
Section: Introduction
confidence: 99%