Proceedings of Motion on Games 2013
DOI: 10.1145/2522628.2522904

A Practical and Configurable Lip Sync Method for Games

Abstract: We demonstrate a lip animation (lip sync) algorithm for real-time applications that can be used to generate facial movements synchronized with audio produced from natural speech or a text-to-speech engine. Our method requires an animator to construct animations using a canonical set of visemes for all pairwise combinations of a reduced phoneme set (phone bigrams). These animations are then stitched together to construct the final animation, adding velocity and lip-pose constraints. This method can be applied …
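As a rough illustration of the phone-bigram idea described in the abstract, the sketch below retimes pre-authored bigram clips to the aligned phoneme durations and applies a simple velocity clamp at the joins. The data layout, the normalized keyframe times, and the max_speed parameter are assumptions for illustration, not the paper's actual representation.

```python
# Illustrative sketch of phone-bigram viseme stitching (not the paper's actual
# implementation). Each pre-authored bigram clip is a list of
# (normalized_time, {viseme_name: weight}) keyframes spanning two phonemes.

from typing import Dict, List, Tuple

Keyframe = Tuple[float, Dict[str, float]]  # (time in seconds, viseme weights)


def stitch_bigrams(phonemes: List[Tuple[str, float, float]],
                   bigram_clips: Dict[Tuple[str, str], List[Keyframe]],
                   max_speed: float = 8.0) -> List[Keyframe]:
    """Stitch pre-authored phone-bigram clips into one viseme track.

    phonemes     -- (phoneme, start_time, end_time) from the aligner/TTS engine
    bigram_clips -- animator-authored curves keyed by phoneme pairs; keyframe
                    times are assumed to be normalized to [0, 1]
    max_speed    -- hypothetical velocity constraint (weight units per second)
    """
    track: List[Keyframe] = []
    for (p0, s0, _e0), (_p1, _s1, e1) in zip(phonemes, phonemes[1:]):
        clip = bigram_clips.get((p0, _p1), bigram_clips.get(("_", "_"), []))
        duration = e1 - s0
        for t, weights in clip:
            # Retime the canonical clip to span the bigram's real duration.
            track.append((s0 + t * duration, dict(weights)))

    # Apply a simple velocity constraint by clamping per-frame weight changes.
    track.sort(key=lambda kf: kf[0])
    for (t0, w0), (t1, w1) in zip(track, track[1:]):
        dt = max(t1 - t0, 1e-4)
        for name, value in w1.items():
            prev = w0.get(name, 0.0)
            step = max(-max_speed * dt, min(max_speed * dt, value - prev))
            w1[name] = prev + step
    return track
```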

Cited by 49 publications (27 citation statements) · References 23 publications
“…words or phonemes) [Bregler et al 1997; Cao et al 2005; Liu and Ostermann 2012; Mattheyses et al 2013; Xu et al 2013] or variable length [Cosatto and Graf 2000; Edwards et al 2016; Ma et al 2006; Taylor et al 2012] units. Unit selection typically involves minimizing a cost function based on the phonetic context and the smoothness.…”
Section: Related Work
confidence: 99%
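The cost minimization mentioned in this excerpt can be sketched as a small dynamic program over candidate units. The function names select_units, target_cost, and join_cost below are illustrative assumptions, not taken from any of the cited papers.

```python
# Viterbi-style unit selection sketch: choose one stored animation unit per
# target phoneme so that the summed target cost (phonetic-context mismatch)
# plus join cost (smoothness between consecutive units) is minimal.

from typing import Callable, List, Sequence


def select_units(targets: Sequence[str],
                 candidates: Callable[[str], List[object]],
                 target_cost: Callable[[str, object], float],
                 join_cost: Callable[[object, object], float]) -> List[object]:
    # best holds (accumulated cost, unit sequence) for each candidate of the
    # most recently processed phoneme.
    best = [(target_cost(targets[0], u), [u]) for u in candidates(targets[0])]
    for phone in targets[1:]:
        new_best = []
        for unit in candidates(phone):
            cost, path = min(
                ((c + join_cost(p[-1], unit), p) for c, p in best),
                key=lambda item: item[0])
            new_best.append((cost + target_cost(phone, unit), path + [unit]))
        best = new_best
    return min(best, key=lambda item: item[0])[1]
```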
“…Better results can be obtained by animating or learning combinations of three phonemes or even longer sequences. Figure 19 illustrates the architecture of the lip syncing method following Queiroz [31] and Ari Shapiro [44]. Basically, this method receives a TTS file or audio file as input containing the character's speech, plus an auxiliary file containing its textual description.…”
Section: Proposed a Prophone Lip Syncing Methods
confidence: 99%
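A loose illustration of that input arrangement is sketched below; the JSON layout of the auxiliary description file and the helper names are assumptions, not the cited system's actual format.

```python
# Hypothetical sketch: a speech audio file plus an auxiliary description file
# are turned into timed phonemes, which then drive a viseme animation.

import json
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class TimedPhoneme:
    phoneme: str
    start: float  # seconds
    end: float    # seconds


def load_phoneme_timings(path: str) -> List[TimedPhoneme]:
    """Read an auxiliary description file, assumed to be a JSON list of
    {"phoneme": ..., "start": ..., "end": ...} entries."""
    with open(path, "r", encoding="utf-8") as handle:
        entries = json.load(handle)
    return [TimedPhoneme(e["phoneme"], e["start"], e["end"]) for e in entries]


def prepare_lip_sync(audio_path: str, description_path: str
                     ) -> Tuple[str, List[TimedPhoneme]]:
    phonemes = load_phoneme_timings(description_path)
    # The timed phonemes would then feed a stitching or unit-selection routine
    # such as the sketches above, played back alongside audio_path.
    return audio_path, phonemes
```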
“…Using a Microsoft speech library [13], the Speech Generator module creates two files: an audio data file and an animation data file, which are used by the Speech Animation Controller to synchronize the audio with the blend shape animation.…”
Section: Fra12
confidence: 99%
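A minimal sketch of that synchronization step, assuming the animation data has already been decoded into timed blend shape keyframes and that a face object exposes a set_blend_shape method (both assumptions, not the cited system's API): each frame, the controller samples the track at the current audio playback time.

```python
# Illustrative animation-controller sketch: keep blend shape weights in sync
# with audio playback by sampling the precomputed track at the audio clock.

from bisect import bisect_right
from typing import Dict, List, Tuple

Keyframe = Tuple[float, Dict[str, float]]  # (time, blend shape weights)


def sample_track(track: List[Keyframe], t: float) -> Dict[str, float]:
    """Linearly interpolate blend shape weights at audio time t."""
    times = [kf[0] for kf in track]
    i = bisect_right(times, t)
    if i == 0:
        return dict(track[0][1])
    if i == len(track):
        return dict(track[-1][1])
    (t0, w0), (t1, w1) = track[i - 1], track[i]
    alpha = (t - t0) / max(t1 - t0, 1e-6)
    names = set(w0) | set(w1)
    return {n: (1 - alpha) * w0.get(n, 0.0) + alpha * w1.get(n, 0.0)
            for n in names}


def update(track: List[Keyframe], audio_time: float, face) -> None:
    # 'face' is a hypothetical object exposing set_blend_shape(name, weight).
    for name, weight in sample_track(track, audio_time).items():
        face.set_blend_shape(name, weight)
```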