ACM SIGGRAPH 2009 Courses
DOI: 10.1145/1667239.1667251
The Digital Emily project

Abstract: The Digital Emily Project was a 2008 collaboration between facial animation company Image Metrics and the Graphics Laboratory at the University of Southern California's Institute for Creative Technologies to achieve one of the world's first photorealistic digital facial performances. The project leveraged latest-generation techniques in high-resolution face scanning, character rigging, video-based facial animation, and compositing. By building an animatable face model whose expressions closely mirror the shapes…

Cited by 104 publications (9 citation statements); references 21 publications (20 reference statements).
“…Although there have been great advances in modeling, rigging, rendering, motion capture, and retargeting techniques, with the goal of escaping the "Uncanny Valley" (Alexander, Rogers, Lambeth, Chiang, & Debevec, 2009; McDonnell, Breidt, & Bülthoff, 2012), the creation of realistic and convincing facial behaviors for games and movies still depends strongly on animator skill. There are some studies on how to convey and evaluate a character's complex facial behaviors (Paleari & Lisetti, 2006; Bevacqua, Mancini, Niewiadomski, & Pelachaud, 2007; Rehm, 2008; Orvalho & Sousa, 2009; Niewiadomski, Hyniewska, & Pelachaud, 2009; Queiroz, Braun, et al., 2010; Demeure, Niewiadomski, & Pelachaud, 2011; de Melo, Carnevale, & Gratch, 2011; Xolocotzin Eligio, Ainsworth, & Crook, 2012), but few really focus on microexpressions (Zielke, Dufour, & Hardee, 2011).…”
Section: Introduction
confidence: 99%
“…9 (a). Face2Face [33] transfers facial expressions inferred by a 3D morphable face model [64], [65], [66], retrieves mouth texture based on the facial expression, and renders the talking face. Our method produces competitive results with realistic texture and mouth movement, suggesting that our Style Translation Network learns accurate mouth movement from the input audio.…”
Section: Comparisons to Audio/Text-Driven Dubbing Methods
confidence: 99%
“…A 3D morphable face model (3DMM) produces vector space representations that capture various facial attributes such as shape, expression and pose [6,4,8,15,16]. Although the previous 3DMM methods [6,4,8] have limitations in estimating face texture and lighting conditions accurately, recent methods [15,16] overcome these limitations. We utilize the state-of-the-art 3DMM [16] to effectively capture the various facial attributes and supervise our model.…”
Section: Related Work
confidence: 99%
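The 3DMM described in the last citation statement represents a face as a vector-space combination of basis shapes. As a minimal illustrative sketch (not any specific 3DMM implementation [6, 4, 8, 15, 16]; all names, dimensions, and random bases here are hypothetical), a linear model sums a mean shape with weighted identity and expression components:

```python
import numpy as np

# Minimal linear 3D morphable model sketch: a face shape is the mean shape
# plus weighted identity and expression basis vectors. Dimensions are
# hypothetical; real models use thousands of vertices and learned bases.
rng = np.random.default_rng(0)

n_vertices = 100          # hypothetical mesh size
n_id, n_expr = 5, 3       # hypothetical basis sizes

mean_shape = rng.normal(size=3 * n_vertices)            # flattened (x, y, z) coords
id_basis = rng.normal(size=(3 * n_vertices, n_id))      # identity (shape) basis
expr_basis = rng.normal(size=(3 * n_vertices, n_expr))  # expression basis

def reconstruct(alpha, beta):
    """Return a face shape from identity coeffs alpha and expression coeffs beta."""
    return mean_shape + id_basis @ alpha + expr_basis @ beta

alpha = rng.normal(size=n_id)    # fixed identity
beta = np.zeros(n_expr)          # neutral expression
neutral = reconstruct(alpha, beta)

beta = beta.copy()
beta[0] = 1.0                    # activate one expression component
posed = reconstruct(alpha, beta)

# Changing only expression coefficients moves the shape exactly along
# the corresponding expression basis direction.
assert np.allclose(posed - neutral, expr_basis[:, 0])
```

Separating identity and expression coefficients in this way is what makes such models useful for expression transfer: the expression weights estimated on one face can be applied to another identity's coefficients.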