ACM SIGGRAPH 2011 Papers
DOI: 10.1145/1964921.1964970

High-quality passive facial performance capture using anchor frames

Cited by 127 publications (109 citation statements)
References 25 publications
“…This problem is eliminated with the non-sequential alignment, where the pattern remains accurately aligned with the face throughout the sequence.…”
[Fig. 9: Comparison of non-sequential alignment using anchor frames of a single expression (Beeler et al. 2011) and minimum spanning tree. Frames 0, 121, 160, 195, 240, 260, 341, 354. Data courtesy Beeler et al., Disney Research.]
Section: Results of Global Alignment
Confidence: 99%
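Neither excerpt spells out either algorithm, but the contrast is between chaining alignment frame-to-frame (or to fixed anchor frames of a single expression) and aligning along a minimum spanning tree of frame similarity, so errors no longer accumulate over one long chain. Below is a minimal sketch of the MST ordering idea, assuming each frame is summarized by a feature vector; the Euclidean distance and the toy features are illustrative stand-ins, not the cited systems' actual descriptors or metric.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

def mst_alignment_order(frames):
    """Order frames for alignment along a minimum spanning tree of
    pairwise appearance distances, instead of strict temporal order.

    `frames` is an (n, d) array of per-frame feature vectors (a
    hypothetical stand-in for a real appearance descriptor).
    Returns (order, parent): each frame is aligned to its tree parent,
    so drift no longer accumulates along the whole sequence.
    """
    diff = frames[:, None, :] - frames[None, :, :]
    dist = np.linalg.norm(diff, axis=2)        # dense pairwise distances
    mst = minimum_spanning_tree(dist)          # sparse (n, n) tree
    sym = mst + mst.T                          # make the tree traversable both ways
    order, parent = breadth_first_order(sym, i_start=0, directed=False)
    return order, parent

# Toy usage: 8 frames described by 16-D feature vectors.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))
order, parent = mst_alignment_order(feats)
for f in order[1:]:
    print(f"align frame {f} to frame {parent[f]}")
```

Each non-root frame then reaches the root through a short chain of tree edges rather than the entire temporal sequence, which is what keeps the pattern accurately aligned over long sequences in the quoted comparison.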
“…A comparison of facial alignment using the anchor frame approach of Beeler et al. (2011) to our non-sequential approach is presented in Fig. 9 using publicly available datasets.…”
Section: Results of Global Alignment
Confidence: 99%
“…Representative techniques include three-dimensional (3D) model-based methods and two-dimensional (2D) image-based methods. The 3D model-based methods include blendshapes with several shape models [12], [22] and expression retargeting with a motion-capture system [3], [19]. The 2D image-based methods synthesize mouth animations using a prepared video corpus [5], [8].…”
Section: Introduction
Confidence: 99%
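The excerpt names blendshapes only in passing; as background, a linear blendshape rig evaluates v = b0 + Σ_i w_i (b_i − b0), i.e. a neutral mesh plus a weighted sum of per-expression offsets. A toy sketch follows; the vertex data, target names, and weights are invented for illustration and come from none of the cited methods.

```python
import numpy as np

def blend(neutral, deltas, weights):
    """Linear blendshape evaluation: neutral vertices plus a weighted
    sum of per-expression offsets (the classic rig formula
    v = b0 + sum_i w_i * (b_i - b0), stored here as precomputed deltas)."""
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy rig: 4 vertices in 3-D, two hypothetical expression targets.
neutral = np.zeros((4, 3))
deltas = np.stack([
    np.full((4, 3), 0.1),    # invented "smile" offsets
    np.full((4, 3), -0.05),  # invented "blink" offsets
])
print(blend(neutral, deltas, weights=np.array([0.8, 0.2])))
```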