2010 International Conference on Computational Intelligence and Communication Networks
DOI: 10.1109/cicn.2010.115

Performance Analysis of Lip Synchronization Using LPC, MFCC and PLP Speech Parameters

Abstract: Many multimedia applications and entertainment industry products like games, cartoons and film dubbing require speech-driven face animation and audio-video synchronization. An Automatic Speech Recognition (ASR) system alone does not give good results in a noisy environment. An Audio-Visual Speech Recognition system plays a vital role in such harsh environments, as it uses both audio and visual information. In this paper, we have proposed a novel approach with enhanced performance over traditional methods that have been r…

Cited by 13 publications (14 citation statements); References 9 publications.
“…We have used a three-layer feed-forward back-propagation neural network (FFBPNN) with an input, a hidden and an output layer. A two-layer FFBP is perhaps the best choice for classification [30]. In our experiment, for binary features, we used 9, 6 and 1 neurons in the input, hidden and output layers, respectively.…”
Section: Results (mentioning)
confidence: 99%
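The 9-6-1 topology described in this statement can be sketched as follows. This is a minimal NumPy illustration of a sigmoid feed-forward network trained with back-propagation; the learning rate, weight initialization and the toy majority-vote task are assumptions for the sketch, not details from the citing paper.

```python
# Minimal sketch of a 9-6-1 feed-forward back-propagation network (FFBPNN)
# for binary-feature classification, as described in the citing statement.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FFBPNN:
    def __init__(self, n_in=9, n_hidden=6, n_out=1, lr=0.1):
        # Small random weights for the input->hidden and hidden->output layers.
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1 + self.b1)       # hidden activations
        self.y = sigmoid(self.h @ self.W2 + self.b2)  # network output
        return self.y

    def backward(self, x, target):
        # Standard back-propagation of the squared error through both layers.
        err_out = (self.y - target) * self.y * (1.0 - self.y)
        err_hid = (err_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * np.outer(self.h, err_out)
        self.b2 -= self.lr * err_out
        self.W1 -= self.lr * np.outer(x, err_hid)
        self.b1 -= self.lr * err_hid

# Toy usage (assumed task): classify whether most of the 9 binary features are set.
X = rng.integers(0, 2, size=(200, 9)).astype(float)
T = (X.sum(axis=1) > 4.5).astype(float).reshape(-1, 1)
net = FFBPNN()
for _ in range(500):
    for x, t in zip(X, T):
        net.forward(x)
        net.backward(x, t)
```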
“…In Automatic Dubbing, lip movement synchronization refers to aligning and synchronizing the lip movements of a character with the corresponding dubbed audio track [1]. The primary objective of lip movement synchronization is to ensure a seamless lip-syncing effect by closely matching the lip movements of the character with the spoken words [2], [3].…”
Section: A. Lip Movement Synchronization (mentioning)
confidence: 99%
“…Mel Frequency Cepstral Coefficients (MFCC) are among the most widely applied features in speech identification applications, as well as in speaker identification. In the 1980s, considerable effort went into developing MFCC [3], [4]. When employing MFCC in applications, factors such as the frequency estimation algorithm, the design of efficient filter banks and the number of features selected to capture the speech signal and its dynamics all play a significant role in the performance of speech recognition systems.…”
Section: Related Work (mentioning)
confidence: 99%
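As an illustration of the design choices this statement mentions (filter-bank size, number of retained cepstral coefficients, and dynamic features), the sketch below computes MFCCs and their deltas with librosa. The sampling rate, frame lengths and coefficient counts are assumed values for the example, not the configuration of the cited work.

```python
# Illustrative MFCC extraction highlighting the design choices mentioned above:
# Mel filter-bank size, number of retained coefficients, and dynamic (delta) features.
import numpy as np
import librosa

def mfcc_with_dynamics(wav_path, n_mfcc=13, n_mels=26, frame_ms=25, hop_ms=10):
    y, sr = librosa.load(wav_path, sr=16000)       # 16 kHz is a common ASR rate
    n_fft = int(sr * frame_ms / 1000)              # analysis window length in samples
    hop = int(sr * hop_ms / 1000)                  # frame shift in samples
    mfcc = librosa.feature.mfcc(
        y=y, sr=sr,
        n_mfcc=n_mfcc,                             # retained cepstral coefficients
        n_mels=n_mels,                             # Mel filter-bank size
        n_fft=n_fft, hop_length=hop,
    )
    delta = librosa.feature.delta(mfcc)            # first-order dynamics
    delta2 = librosa.feature.delta(mfcc, order=2)  # second-order dynamics
    return np.vstack([mfcc, delta, delta2])        # shape: (3 * n_mfcc, n_frames)
```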