ACM SIGGRAPH 2005 Papers
DOI: 10.1145/1186822.1073209

Face transfer with multilinear models

Figure 1: Face Transfer with multilinear models gives animators decoupled control over facial attributes such as identity, expression, and viseme. In this example, we combine pose and identity from the first frame, a surprised expression from the second, and a viseme (mouth articulation for a sound midway between "oo" and "ee") from the third. The resulting composite is blended back into the original frame.

Abstract: Face Transfer is a method for mapping video-recorded performances of one individual to facial anima…
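The abstract describes a multilinear face model that gives decoupled control over identity, expression, and viseme. The sketch below is not the paper's implementation; it only illustrates, under assumed names and dimensions, how a Tucker-style core tensor can be contracted with one weight vector per attribute mode to synthesize a face, and how attributes from different frames can be mixed.

```python
import numpy as np

# Hypothetical sizes: V stacked vertex coordinates, and per-mode parameter
# counts for identity, expression, and viseme (all assumed for illustration).
V, n_id, n_expr, n_vis = 3000, 16, 8, 5

# Core tensor that would be learned offline (e.g. by N-mode SVD of a
# registered face corpus); random here only to make the sketch runnable.
core = np.random.randn(V, n_id, n_expr, n_vis)

def synthesize(core, w_id, w_expr, w_vis):
    """Contract the core tensor with one weight vector per attribute mode.

    Each attribute is controlled independently, which is the 'decoupled
    control' the abstract refers to."""
    mesh = np.tensordot(core, w_id,   axes=([1], [0]))   # collapse identity mode
    mesh = np.tensordot(mesh, w_expr, axes=([1], [0]))   # collapse expression mode
    mesh = np.tensordot(mesh, w_vis,  axes=([1], [0]))   # collapse viseme mode
    return mesh  # shape (V,): stacked vertex coordinates

# Face Transfer-style mixing: identity weights from one frame, expression
# weights from another, viseme weights from a third (placeholder values).
w_id, w_expr, w_vis = (np.random.randn(n_id),
                       np.random.randn(n_expr),
                       np.random.randn(n_vis))
composite = synthesize(core, w_id, w_expr, w_vis)
print(composite.shape)
```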

Cited by 120 publications (105 citation statements). References 39 publications (17 reference statements).
“…When data is missing, the gaps have to be filled. Video-driven animation with multilinear models (including the missing-data problem) was demonstrated by Vlasic et al. [24]. Although an experienced observer can make out the manipulation, the approach looks very promising.…”
Section: Multiplicative (mentioning)
confidence: 99%
“…Compared to matrix SVD, N-mode SVD does not result in an optimal solution and further refinement is required [24]. Unfortunately, many matrix SVD properties do not hold for N-mode SVD.…”
Section: Multiplicative (mentioning)
confidence: 99%
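The statement above points out that N-mode SVD, unlike the matrix SVD, does not by itself give an optimal truncated decomposition. The following is a minimal sketch of the basic N-mode SVD (HOSVD) step, one matrix SVD per mode unfolding; the function names and toy tensor are assumptions, and in practice the truncated factors are refined further, e.g. by alternating least squares (HOOI).

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def n_mode_svd(tensor, ranks):
    """Truncated N-mode SVD (HOSVD): left singular vectors of each unfolding.

    Unlike the matrix SVD, truncating each mode independently is not the
    best rank-(r1, ..., rN) approximation, which is why further refinement
    is usually applied afterwards."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: contract the data tensor with each factor transposed.
    core = tensor
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Toy example: a 3-mode data tensor (e.g. vertices x identities x expressions).
data = np.random.randn(30, 10, 6)
core, factors = n_mode_svd(data, ranks=(8, 5, 4))
print(core.shape, [U.shape for U in factors])
```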
“…Multilinear face models have also been successfully applied by Vlasic et al. [41] to face analysis. However, they use 3D scanned face data, and pose variations are not explicitly modeled.…”
Section: Previous Work (mentioning)
confidence: 99%
“…The data analysis may be based on machine learning [1,2,4,6,7] or a probabilistic framework [5]. Ezzat et al. [1] employ a variant of MMM to synthesize mouth configurations for novel speech.…”
Section: Related Work (mentioning)
confidence: 99%
“…Voice Puppetry [5] utilizes a probabilistic framework to find an optimal trajectory for the whole utterance based on facial gestures learned from videos. Vlasic et al. [7] use a multilinear model to separate different attributes of facial models, e.g. expressions and visemes, and connect the multilinear model directly to video to recover a time series of poses and attribute parameters.…”
Section: Related Work (mentioning)
confidence: 99%
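The last statement notes that the multilinear model is connected directly to video to recover a time series of poses and attribute parameters. One reason this is tractable is that the model is linear in each attribute when the others are held fixed. The sketch below is an assumed formulation, not the authors' tracker: it estimates per-frame expression weights by linear least squares against observed vertex coordinates, reusing the hypothetical core-tensor layout from the earlier sketch.

```python
import numpy as np

# Hypothetical layout: (vertex coordinates) x identity x expression x viseme.
V, n_id, n_expr, n_vis = 300, 16, 8, 5
core = np.random.randn(V, n_id, n_expr, n_vis)

def expression_basis(core, w_id, w_vis):
    """Collapse the identity and viseme modes; the result maps expression
    weights linearly to vertex coordinates (shape V x n_expr)."""
    B = np.tensordot(core, w_id, axes=([1], [0]))   # (V, n_expr, n_vis)
    B = np.tensordot(B, w_vis, axes=([2], [0]))     # (V, n_expr)
    return B

# Per-frame fit: with identity and viseme fixed, the observed geometry is
# linear in the expression weights, so ordinary least squares recovers them.
w_id, w_vis = np.random.randn(n_id), np.random.randn(n_vis)
B = expression_basis(core, w_id, w_vis)
observed = np.random.randn(V)                       # placeholder frame data
w_expr, *_ = np.linalg.lstsq(B, observed, rcond=None)
print(w_expr.shape)                                 # (n_expr,)
```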