2010 International Computer Symposium (ICS2010)
DOI: 10.1109/compsym.2010.5685459
Co-articulation generation using maximum direction change and apparent motion for Chinese visual speech synthesis

Abstract: This study presents an approach for automated lip synchronization and smoothing for Chinese visual speech synthesis. A facial animation system with a synchronization algorithm is also developed to visualize an existing Text-To-Speech system. Motion parameters for each viseme are first constructed from video footage of a human speaker. To synchronize the parameter set sequence and speech signal, a maximum direction change algorithm is proposed to select significant parameter set sequences according to the spe…
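The "maximum direction change" idea from the abstract, selecting the most significant parameter sets from a motion trajectory, can be sketched as follows. The paper's actual parameterization and selection criterion are not shown in this excerpt, so `direction_change` and `select_keyframes` are hypothetical names illustrating one plausible reading: score each interior frame by the angle between its incoming and outgoing motion vectors, then keep the frames with the sharpest turns.

```python
import math

def direction_change(p_prev, p_curr, p_next):
    """Angle (radians) between the incoming and outgoing motion
    vectors at p_curr; larger angle = sharper direction change."""
    v1 = [c - p for p, c in zip(p_prev, p_curr)]
    v2 = [n - c for c, n in zip(p_curr, p_next)]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    if n1 == 0.0 or n2 == 0.0:
        return 0.0  # no motion on one side: no direction change
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.acos(cos)

def select_keyframes(frames, k):
    """Pick the k interior frames with the largest direction change,
    returned in temporal order."""
    scores = [(direction_change(frames[i - 1], frames[i], frames[i + 1]), i)
              for i in range(1, len(frames) - 1)]
    scores.sort(reverse=True)
    return sorted(i for _, i in scores[:k])

# Example: a trajectory that moves right, then turns upward at index 2.
frames = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (2.0, 2.0)]
print(select_keyframes(frames, 1))  # the 90-degree turn at index 2
```

Under this sketch, frames where the motion parameters keep moving in the same direction score near zero and are dropped, which matches the abstract's goal of retaining only the significant parameter sets for synchronization with the speech signal.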


Cited by 2 publications (1 citation statement)
References 18 publications (18 reference statements)
“…Visual feedback has been extensively developed over the last decade in a diverse array of applications (Yu and Wang, 2015; Liu et al., 2014; Wu et al., 2010; Abdelaziz et al., 2015). Facial animations including lip information can effectively reflect the manner of articulation.…”
Section: Introduction
confidence: 99%