2013 International Conference on Information Communication and Embedded Systems (ICICES)
DOI: 10.1109/icices.2013.6508395

Vision-based sign language translation device

Cited by 37 publications (15 citation statements)
References 3 publications
“…A sign language (also signed language or simply signing) is a language which uses manual communication and body language to convey meaning, as opposed to acoustically conveyed sound patterns [8]. This can involve simultaneously combining hand shapes, orientation and movement of the hands, arms or body, and facial expressions to fluidly express a speaker's thoughts.…”
Section: Sign Recognition
confidence: 99%
“…Static gestures represent the alphabets and numerals of a natural language, which are arbitrary signs representing specific concepts. Dynamic gestures include the words, sentences, expressions or finger spellings, whose signs represent varying concepts [8]. The first group is the foundation of sign language, introduced by the pose of the hands (postures), whereas the second group's signs involve motion of the hands, the head, or both.…”
Section: Introduction
confidence: 99%
“…Cui et al. in [7] proposed a CNN with temporal convolution and pooling for spatiotemporal representation learning from video, and an RNN with a bidirectional Long Short-Term Memory (LSTM) for mapping feature sequences to sequences of annotations. Madhuri et al. in [18] present a real-time vision-based system for recognizing continuous finger-spelled Sign Language (SL), using a single camera to track the user's unadorned hands. The goal is to help hearing- or speech-impaired people communicate with people who do not know SL.…”
Section: From Pose To Gesture
confidence: 99%
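The pipeline quoted above first shrinks a video's frame sequence with temporal convolution and pooling, then hands the shorter feature sequence to a bidirectional LSTM. A minimal sketch of the sequence-length bookkeeping involved, assuming "valid" (no-padding) convolution and non-overlapping pooling — the function name and parameter choices are illustrative, not taken from [7]:

```python
# Hypothetical illustration (not code from [7]): how one temporal
# convolution + pooling stage of a CNN turns a clip of num_frames frames
# into a shorter feature sequence for a bidirectional LSTM.

def temporal_feature_len(num_frames: int, kernel: int,
                         stride: int = 1, pool: int = 2) -> int:
    """Length of the feature sequence after temporal conv then pooling."""
    conv_len = (num_frames - kernel) // stride + 1  # valid (no-padding) convolution
    return conv_len // pool                         # non-overlapping temporal pooling

# A 32-frame clip with a temporal kernel of 5 and pooling of 2
# yields 14 feature steps for the BiLSTM to map to annotations.
print(temporal_feature_len(32, kernel=5, pool=2))
```

Stacking further conv/pool stages repeats the same shrinkage, which is why such models emit far fewer annotation steps than input frames.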
“…The goal is to help hearing- or speech-impaired people communicate with people who do not know SL. Although facial expressions add relevant information to the emotional aspect of a sign, they are not considered in [18], since their analysis complicates the problem. The system focuses only on translating one-handed signs representing the alphabet (A-Z) and numbers (0-9).…”
Section: From Pose To Gesture
confidence: 99%