2009
DOI: 10.1109/lsp.2009.2016011

Lip Shape and Hand Position Fusion for Automatic Vowel Recognition in Cued Speech for French

Abstract: Cued …

Year Published: 2010–2023

Cited by 29 publications (20 citation statements)
References 8 publications

“…In previous studies (e.g., [4,5]) the authors used a video processing technique based on blue color in order to track the hand positions and handshapes. In this study, landmarks with different colors were placed on the fingers, resulting in a faster and more accurate image processing stage.…”
Section: Methods (mentioning)
Confidence: 99%
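
The colored-landmark tracking described in this statement amounts to color segmentation followed by centroid extraction per marker. The Python sketch below illustrates that general idea with OpenCV; the HSV thresholds, marker colors, and function names are illustrative assumptions, not details taken from the cited papers.

# Minimal sketch of color-based finger-marker tracking (assumed setup,
# not the cited papers' implementation): segment each colored landmark
# in HSV space and return its centroid in the frame.
import cv2
import numpy as np

# Hypothetical HSV ranges for a few colored finger markers.
LANDMARK_RANGES = {
    "blue":  ((100, 120, 70), (130, 255, 255)),
    "green": ((40, 80, 70), (80, 255, 255)),
    "red":   ((0, 120, 70), (10, 255, 255)),
}

def track_landmarks(frame_bgr):
    """Return {marker_name: (x, y)} centroids of the visible colored markers."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    positions = {}
    for name, (lo, hi) in LANDMARK_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        # Remove small speckles before computing the centroid.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        m = cv2.moments(mask)
        if m["m00"] > 0:  # marker found in this frame
            positions[name] = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return positions
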
“…Previously the authors presented vowel- [4], consonant- [5], and isolated word recognition [6] in Cued Speech for French based on HMMs. In the current study, continuous phoneme recognition is introduced using data from a deaf and a normal-hearing cuer.…”
Section: Introduction (mentioning)
Confidence: 99%
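
The HMM-based recognition referred to here can be illustrated as one generative model per phonetic class, trained on fused lip/hand feature sequences and compared by log-likelihood at test time. The sketch below uses the hmmlearn library in Python; the feature format, model sizes, and function names are assumptions for illustration, not the authors' actual setup.

# Rough sketch of per-class HMM classification over fused lip/hand features
# (assumed pipeline, not the cited papers' code).
import numpy as np
from hmmlearn import hmm

def train_vowel_models(training_data, n_states=3):
    """training_data: {vowel_label: list of (T_i, D) feature sequences}."""
    models = {}
    for vowel, sequences in training_data.items():
        X = np.vstack(sequences)                 # stack all sequences of this class
        lengths = [len(s) for s in sequences]    # sequence boundaries for hmmlearn
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=20)
        model.fit(X, lengths)                    # Baum-Welch training per class
        models[vowel] = model
    return models

def classify(models, sequence):
    """Pick the class whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda v: models[v].score(sequence))
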
“…It is known that human lip readers rely heavily on context when lip reading and also have training tricks, which allow them to set a baseline for a new subject, such as asking them questions where the answers are either known or easily inferred. Heracleous et al. showed that using the hand shapes from cued speech (where hand gestures are used to disambiguate vowels in spoken words for lip readers) improved the recognition rate of lip reading significantly [42]. They modelled the lip using some basic shape parameters; however, it is also possible to track the lips, as shown by Ong and Bowden, who use rigid flocks of linear predictors to track 34 points on the contour of the lips [81].…”
Section: Non-manual Features (mentioning)
Confidence: 99%
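
For context on the lip-tracking reference, a single linear predictor of the kind used in "flocks of linear predictors" can be viewed as a least-squares map from local intensity differences to the displacement of one contour point. The NumPy toy below is a simplified sketch under assumed choices (sampling pattern, training scheme, and names are invented for illustration); it is not Ong and Bowden's implementation.

# Toy sketch of one linear predictor for a single lip contour point
# (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
SUPPORT = rng.integers(-10, 11, size=(50, 2))   # fixed support-pixel offsets

def sample(image, point):
    """Intensities of the support pixels around an integer (row, col) point."""
    coords = np.clip(point + SUPPORT, 0, np.array(image.shape) - 1)
    return image[coords[:, 0], coords[:, 1]].astype(float)

def train_predictor(image, true_point, n_samples=200, max_disp=8):
    """Learn H mapping intensity differences to the displacement back to true_point."""
    ref = sample(image, true_point)
    V, D = [], []
    for _ in range(n_samples):
        d = rng.integers(-max_disp, max_disp + 1, size=2)
        V.append(sample(image, true_point + d) - ref)   # observed intensity change
        D.append(-d)                                    # displacement that undoes the offset
    H, *_ = np.linalg.lstsq(np.array(V), np.array(D), rcond=None)
    return ref, H

def predict(image, point, ref, H):
    """Predict the corrected point position from the current integer guess."""
    return point + (sample(image, point) - ref) @ H
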
“…Therefore, in 1967, Cornett developed the Cued Speech system as a supplement to lip reading [2,3].…”
Section: Introduction (mentioning)
Confidence: 99%