2017
DOI: 10.1109/mcg.2017.3271467
Data-Driven Approach to Synthesizing Facial Animation Using Motion Capture

Abstract: Producing cartoon animations is a laborious task, and there is a distinct lack of automatic tools to help animators, particularly with creating facial animation. The proposed method uses real-time video-based motion tracking to generate facial motion as input and then matches it to existing hand-created animation curves. The synthesized animations can then be refined and polished by an animator, saving considerable time in overall production.
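The abstract only sketches the matching step, so the toy example below illustrates one plausible reading of it: a captured facial-motion channel is compared against a small library of hand-created animation curves and the closest curve is selected. The resampling step, the Euclidean distance, and the curve names are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: the paper matches captured facial motion to
# hand-authored animation curves, but the features and distance metric are
# not given in the abstract. Everything below is an assumption for clarity.
import numpy as np

def resample(curve, n=64):
    """Resample a 1D keyframe curve to a fixed number of samples."""
    t = np.linspace(0.0, 1.0, num=len(curve))
    t_new = np.linspace(0.0, 1.0, num=n)
    return np.interp(t_new, t, curve)

def best_match(captured, library):
    """Return the hand-authored curve closest to the captured motion.

    `captured` is a 1D array of tracked values over time (one facial channel);
    `library` maps curve names to hand-created keyframe curves.
    """
    c = resample(np.asarray(captured, dtype=float))
    best_name, best_dist = None, np.inf
    for name, curve in library.items():
        d = np.linalg.norm(c - resample(np.asarray(curve, dtype=float)))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name, best_dist

# Toy usage: match a tracked channel against two hypothetical stock curves.
library = {
    "smile": [0.0, 0.2, 0.6, 0.9, 1.0, 0.8],
    "frown": [0.0, -0.1, -0.4, -0.7, -0.8, -0.6],
}
captured = [0.05, 0.25, 0.55, 0.85, 0.95, 0.75]
print(best_match(captured, library))
```

In the actual system the matching would presumably run per facial channel, with the selected curves then handed to an animator for the refinement pass the abstract describes.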

Cited by 9 publications (4 citation statements). References 10 publications.
“…The amount of calculation and data required is large and the speed is very slow. These methods for obtaining 3D information have various defects, but the biggest obstacle to the application of 3D face recognition in practice is that a large amount of data during use often consumes a lot of time, and it is difficult to meet the real-time needs in actual applications [12,13].…”
Section: Related Work
confidence: 99%
“…Largely, the approaches allow the cartoon‐like effects to be applied to the shape using additional inputs, for example, natural extensions of 2D effects can be achieved through the use of sketch‐based interfaces guiding computer‐generated deformation such as exaggeration [LGXS03], geometric constraints [NSACO05,RHC09], up to guiding an entire suggestive animation [KCGF14, KGUF16]. Example‐based techniques have also been explored to model arbitrary predefined deformations [RM13, DBB*17, RPM] or rendering styles [BCK*13] that can be triggered during animation and transferred to a target shape [BLCD02, LYKL12]. These approaches provide a fine level of control and artistic expressiveness on the visual result, but they must be set up manually for each specific shape and animation.…”
Section: Related Work
confidence: 99%
“…Hyde et al [13] conducted two experiments showing how exaggerated facial movement influences the impressions of cartoons and more realistic characters, and stated that an essential factor in diminishing the sensation of strangeness is the attempt to replicate human expressions (body and facial) in CG characters. Ruhland et al [14] used algorithms to synthesize real-time motion capture of human expressions with animation data created by designers. To validate synthesized animations, they conducted a perceptual study, and results indicated that the animations had an expressive similarity to animations made by hand.…”
Section: Related Work
confidence: 99%