Seventh IEEE International Symposium on Wearable Computers, 2003. Proceedings.
DOI: 10.1109/iswc.2003.1241392
Using multiple sensors for mobile sign language recognition

Abstract: We build upon a constrained, lab-based Sign Language recognition system with the goal of making it a mobile assistive technology. We examine using multiple sensors for disambiguation of noisy data to improve recognition accuracy. Our experiment compares the results of training a small gesture vocabulary using noisy vision data, accelerometer data and both data sets combined.
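The abstract describes comparing recognition accuracy when a small gesture vocabulary is trained on vision features alone, accelerometer features alone, and the two combined. As a minimal illustration of that kind of sensor-fusion comparison (not the authors' actual pipeline; the synthetic data, per-sample feature sizes, frame-level concatenation, and choice of classifier below are all assumptions), the sketch trains the same classifier on each feature set and reports cross-validated accuracy.

# Minimal sketch of a vision / accelerometer / combined comparison.
# Assumptions (not from the paper): synthetic stand-in features, simple
# feature-level concatenation, and a generic SVM classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_samples, n_classes = 300, 5          # small gesture vocabulary
vision_dim, accel_dim = 8, 3           # hypothetical per-sample feature sizes
labels = rng.integers(0, n_classes, size=n_samples)

# Stand-in features: noisy vision features and accelerometer features that
# both carry some class information (class-dependent means, different noise).
vision = rng.normal(labels[:, None], 2.0, size=(n_samples, vision_dim))
accel = rng.normal(labels[:, None], 1.5, size=(n_samples, accel_dim))
combined = np.hstack([vision, accel])  # simple feature-level fusion

def accuracy(features: np.ndarray) -> float:
    """Cross-validated accuracy of one classifier on one feature set."""
    clf = make_pipeline(StandardScaler(), SVC())
    return cross_val_score(clf, features, labels, cv=5).mean()

for name, feats in [("vision only", vision),
                    ("accelerometer only", accel),
                    ("combined", combined)]:
    print(f"{name:>18}: {accuracy(feats):.2f}")

With these assumptions, the combined feature set typically scores at least as well as either modality alone, which mirrors the disambiguation argument the abstract makes for using multiple sensors.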

Cited by 116 publications (85 citation statements) · References 13 publications

Citation statements (ordered by relevance):
“…From the literature survey, we understand that the models proposed for sign language recognition address the problem either at finger spelling level [20,24,26,27] or at word level [3,9,23,30,32]. Since signs used by hearing impaired people are very abstract, the sign language recognition based on fingerspelling or word seems to be cumbersome and not effective.…”
Section: Related Work
confidence: 99%
“…Another novel approach to sign language data acquisition was taken by Brashear et al [18], where features from both a hat-mounted camera and accelerometer data were used to classify signs (Figure 2.8). While wearable computing approaches to data acquisition can extract accurate features representing the signs being performed, some of these approaches require that the signer wears cumbersome devices which can hinder the ease and naturalness of signing.…”
Section: Fig. 2.6: Sensing Glove With Six Accelerometers and a Basic S…
confidence: 99%
“…As we incorporate these findings into our ASL generation software, we will conduct additional evaluation studies with native ASL signers to evaluate the 3D animations that result and guide the development of our technology. Earlier researchers have collected video-based sign language corpora [2] [5] [6] [7] [16] or collected samples of sign language using motion-capture [1] [4] [20]. For this project, we are collecting a corpus of ASL using a motion-capture suit and gloves, and we are recording multisentence passages in which signers associate entities with locations in space.…”
Section: Broader Research Objectives and Progress
confidence: 99%