2014
DOI: 10.1007/978-3-319-10590-1_19

Read My Lips: Continuous Signer Independent Weakly Supervised Viseme Recognition

Abstract: This work presents a framework to recognise signer-independent mouthings in continuous sign language, with no manual annotations needed. Mouthings are lip movements that correspond to the pronunciation of words, or parts of them, during signing. Research on sign language recognition has focused extensively on the hands as features, but sign language is multi-modal, and a full understanding, particularly with respect to its lexical variety, language idioms and grammatical structures, is not possible wit…

Citation Types: 0 supporting, 21 mentioning, 0 contrasting

Cited by 16 publications (21 citation statements)
References 27 publications
“…Another approach uses Expectation Maximisation (EM) [7] to fit a model to data observations. Koller et al. [20] used EM to fit a Gaussian Mixture Model (GMM) to Active Appearance Model (AAM) mouth features in order to find and model mouth-shape sequences in sign language. Other works use EM to link text and image regions [37].…”
Section: State-of-the-art (mentioning)
confidence: 99%
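The excerpt above describes fitting a Gaussian Mixture Model via Expectation Maximisation to per-frame Active Appearance Model mouth features. As a minimal sketch of that technique only — not of Koller et al.'s actual pipeline — the following uses scikit-learn's GaussianMixture, which runs EM internally; the feature matrix, component count, and covariance type are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for AAM mouth features: one row per video frame, e.g. a
# low-dimensional shape/appearance parameter vector (values are synthetic).
aam_features = rng.normal(size=(5000, 20))

# GaussianMixture.fit runs EM: the E-step computes soft assignments of
# frames to components; the M-step re-estimates means, covariances, weights.
gmm = GaussianMixture(n_components=12, covariance_type="diag",
                      max_iter=100, random_state=0)
gmm.fit(aam_features)

# Each component can be read as a candidate mouth-shape cluster; per-frame
# posteriors give a soft mouth-shape labelling over time.
posteriors = gmm.predict_proba(aam_features)  # shape: (n_frames, n_components)
frame_labels = posteriors.argmax(axis=1)
```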
“…This clustering is motivated by the vision of using this method for facial expression animation in avatar-based SL synthesis. Koller et al. [33], [34] utilize the same corpus (RWTH-PHOENIX-Weather) as well as a similar approach of extracting high-level facial features to model mouthings in SL. In [33], the authors develop a novel viseme recognition method that is specifically designed for SL, does not require any manual annotation and is signer-independent. In [34], they propose an algorithm that automatically annotates mouthings in SL videos.…”
Section: Mouth Non-manuals in Existing ASLR Systems (mentioning)
confidence: 99%
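The excerpts characterise [33] and [34] only at a high level, so the sketch below is a hypothetical illustration of what "weakly supervised" annotation can mean in this setting: recovering frame-level viseme labels from a word-level label alone, by expanding the word into a viseme sequence through a lexicon and force-aligning that sequence to per-frame viseme scores. The lexicon, viseme inventory, and scoring model are all assumptions, not the authors' actual method.

```python
import numpy as np

# Hypothetical word-to-viseme lexicon (not from the paper).
LEXICON = {"regen": ["r", "e", "g", "n"]}

def force_align(frame_scores: np.ndarray, target: list[int]) -> list[int]:
    """Monotonically align a viseme sequence to frames (Viterbi-style DP).

    frame_scores: (T, V) per-frame log-scores for each of V visemes.
    target: viseme indices the clip is weakly labelled with (needs T >= len).
    Returns the best-scoring viseme index for every frame.
    """
    T, S = frame_scores.shape[0], len(target)
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0, 0] = frame_scores[0, target[0]]
    for t in range(1, T):
        for s in range(S):
            stay = dp[t - 1, s]                            # remain in viseme s
            move = dp[t - 1, s - 1] if s > 0 else -np.inf  # advance from s-1
            back[t, s] = s if stay >= move else s - 1
            dp[t, s] = max(stay, move) + frame_scores[t, target[s]]
    # Backtrack from the final state: every target viseme must be consumed.
    path, s = [], S - 1
    for t in range(T - 1, -1, -1):
        path.append(target[s])
        s = back[t, s]
    return path[::-1]

# Usage: 40 frames of scores over a toy 5-viseme inventory.
visemes = ["r", "e", "g", "n", "sil"]
scores = np.log(np.random.default_rng(1).dirichlet(np.ones(len(visemes)), 40))
target = [visemes.index(v) for v in LEXICON["regen"]]
frame_visemes = force_align(scores, target)  # one viseme label per frame
```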