2012
DOI: 10.1007/s00779-012-0615-1
Non-manual cues in automatic sign language recognition

Cited by 23 publications (6 citation statements)
References 27 publications
“…Most previous research has focused on individual modalities, such as the face ([4], [5]), head pose [6], mouth [7]-[9], eye gaze [10] and body pose ([11], [12]). The features can be classified as manual features (intentional expressions produced when performing a sign, such as hand gestures and body movements) and non-manual features (unintentional expressions produced when performing a sign, such as lip movements and eye gaze). Researchers have mainly concentrated on the manual features for SLR [13]-[15], and have ignored the important and rich information in the non-manual features.…”
Section: Multi-modality
confidence: 99%
“…For the doubt expression, we found that the distance and angle features showed only small changes between positive and negative labels, mainly because this expression is characterized by a slight contraction of the eyes and mouth; thus, we included two extra features for the left eyebrow (90, 91, 92, 93, 94, 21, 22, 23, 24, 25), mouth (48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59), and left eye (0, 1, 2, 3, 4, 5, 6, 7): the enclosed area [66] and the principal axes ratio, or eccentricity [67]. These allowed us to better identify the characteristic patterns of this expression. Additionally, for the doubt expression, we did not concatenate the features belonging to each frame in a window.…”
Section: Feature Extraction and Perspective Construction
confidence: 99%
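The statement above mentions two shape features computed from groups of facial landmarks: the enclosed area and the principal axes ratio (eccentricity). The following is a minimal sketch, not the cited authors' implementation, of how such features could be computed from an ordered 2-D landmark contour; the NumPy helpers and the example coordinates are assumptions for illustration only.

import numpy as np

def enclosed_area(points):
    # Shoelace formula for the area enclosed by an ordered (N, 2) contour.
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def principal_axes_ratio(points):
    # Ratio of minor to major principal axis of the point cloud (1.0 for a circle);
    # the statement above refers to this quantity as the eccentricity.
    centered = points - points.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))
    return float(np.sqrt(eigvals[0] / max(eigvals[1], 1e-12)))

# Hypothetical left-eye landmarks from one frame (coordinates are illustrative).
left_eye = np.array([[10.0, 5.0], [12.0, 4.0], [14.0, 4.0],
                     [16.0, 5.0], [14.0, 6.0], [12.0, 6.0]])
extra_features = [enclosed_area(left_eye), principal_axes_ratio(left_eye)]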
“…GFE has been used in data-driven machine learning approaches for various sign languages, such as American (ASL) [21][22][23], German [24], Czech [25], and Turkish [26], among others. In the literature, it has been proposed that a classifier learn to recognize syntactic-type GFE in Libras (Brazilian Sign Language) using a feature vector composed of distances, angles and depth points extracted from the contour points of the face (captured by a depth camera) [9].…”
Section: Introduction
confidence: 99%
“…Classification was most often carried out using hidden Markov models (HMM), e.g., [30][31][32], artificial neural networks (ANN), e.g., [33][34][35], dynamic time warping (DTW), e.g., [27,36], and other methods. Vision-based methods allow natural interaction and the inclusion of non-manual features [37,38], but they are dependent on lighting conditions, background colors, and the user's clothing. Therefore, such solutions work only in controlled laboratory conditions.…”
Section: Recent Work
confidence: 99%
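Dynamic time warping, mentioned above as one of the common sequence classification methods, aligns two feature sequences of possibly different lengths and returns an alignment cost. Below is a minimal sketch, assuming NumPy arrays of per-frame feature vectors; the function name and the nearest-template note are illustrative and not taken from the cited works.

import numpy as np

def dtw_distance(a, b):
    # Classic O(len(a) * len(b)) dynamic-programming alignment cost between
    # two sequences of per-frame feature vectors, shapes (T1, D) and (T2, D).
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

# A nearest-template classifier would assign a query sequence to the sign
# whose stored template sequence has the lowest DTW cost to the query.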