2016
DOI: 10.18517/ijaseit.6.6.1252

A Sign Language to Text Converter Using Leap Motion

Abstract: This paper presents a prototype that converts sign language into text. A Leap Motion controller was utilised as the interface for hand motion tracking, without the need to wear any external instruments. Three recognition techniques were employed to measure the performance of the prototype, namely Geometric Template Matching, Artificial Neural Network and Cross-Correlation. The 26 letters of the American Sign Language alphabet were chosen for training and testing the proposed prototype. The experimental results show…
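
The page does not reproduce the paper's implementation, but the pipeline the abstract describes (Leap Motion hand tracking followed by letter classification) can be illustrated with a short sketch. The snippet below shows one plausible form of the Geometric Template Matching step, assuming each frame has already been reduced to five fingertip positions and a palm centre; the feature layout, helper names and placeholder templates are assumptions for illustration, not the authors' code.

```python
import numpy as np

def to_feature(fingertips_mm, palm_mm):
    """Translate fingertip positions into a palm-centred, scale-normalised vector."""
    rel = np.asarray(fingertips_mm, dtype=float) - np.asarray(palm_mm, dtype=float)
    scale = np.linalg.norm(rel, axis=1).max() or 1.0   # guard against a degenerate frame
    return (rel / scale).ravel()                       # shape (15,): 5 fingertips x 3 axes

def classify(feature, templates):
    """Return the letter whose stored template is geometrically closest (Euclidean)."""
    return min(templates, key=lambda letter: np.linalg.norm(feature - templates[letter]))

# Hypothetical usage: one stored template per letter, one live frame to classify.
templates = {letter: np.random.rand(15) for letter in "ABC"}   # placeholder templates
frame_feature = to_feature(np.random.rand(5, 3) * 100.0, [50.0, 50.0, 50.0])
print(classify(frame_feature, templates))
```

In an actual prototype the templates would be built from recorded training frames for each of the 26 ASL letters rather than the random placeholders used here.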

Cited by 10 publications (6 citation statements)
References 6 publications

“…Chuan et al [210] use k-NN and SVM based recognition strategies with recognition accuracies of 72.8% and 79.8%. Geometric Template Matching (GTM), ANN and Cross-Correlation (CC) are used by Khan et al [109] to create an SLR-to-text converter with accuracies of 52.6%, 44.8% and 35.9%. Using the distances between the palm and the phalanges as features with an ANN, a recognition accuracy of 99.3% is reached, while using the lengths of the phalanges reaches 99%.…”
Section: Methods
confidence: 99%
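
The feature comparison mentioned in the statement above (palm-to-phalanx distances versus phalanx lengths, each classified with an ANN) can be sketched roughly as follows. This is only an illustration under assumed array shapes and synthetic data, using scikit-learn's MLPClassifier as a stand-in for the cited network; with random placeholder features the reported accuracies will be near chance, the point is the shape of the pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, n_bones = 520, 19        # e.g. 26 letters x 20 samples; bone count is an assumption

# (a) palm-to-phalanx distances and (b) phalanx lengths -- both synthetic placeholders here.
X_dist = rng.random((n_samples, n_bones))
X_len = rng.random((n_samples, n_bones))
y = rng.integers(0, 26, size=n_samples)              # 26 ASL letter classes

for name, X in (("palm-to-phalanx distances", X_dist), ("phalanx lengths", X_len)):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    ann = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0)
    ann.fit(X_tr, y_tr)
    print(f"{name}: test accuracy {ann.score(X_te, y_te):.2f}")
```
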
“…General SLTC applications are presented, for example, by Fok et al [107] or Kumar et al [108]. Country-specific recognition approaches are available, amongst others, for American [109], Australian [110], Arabic [111], Greek [112], Indian [113] and Mexican [114] sign languages. An evaluation of different potential solutions for the recognition, translation and representation of sign language for e-learning platforms has been conducted by Martins et al [115].…”
Section: Applications and Contexts
confidence: 99%
“…The use of Leap Motion for sign language is also being investigated [22][23][24]. Chuan et al [22] investigated the recognition of English sign language using the 3D motion sensor, while Khan et al [23] researched a prototype that can convert sign language to text. Mohandes et al [24] investigated Arabic sign language recognition.…”
Section: Related Work
confidence: 99%
“…Although Leap Motion has been adopted in previous works on sign language recognition, most of them focused on static gestures instead of dynamic gestures [14,15,16]. The study in [16] showed that the results in static alphabet recognition were low using the built-in algorithms, namely Geometric Template Matching (GTM), Artificial Neural Network (ANN) and Cross-Correlation (CC); the highest average accuracy achieved was only 52.56%, with the GTM algorithm.…”
Section: Introduction
confidence: 99%
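
For completeness, a minimal sketch of the Cross-Correlation option referred to above: a live feature sequence is scored against one stored sequence per letter by peak normalised cross-correlation, and the best-scoring letter is returned. The sequence length, the single-channel feature and the placeholder templates are all assumptions for illustration, not the built-in Leap Motion implementation.

```python
import numpy as np

def ncc_peak(a, b):
    """Peak of the normalised cross-correlation between two 1-D feature sequences."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return np.correlate(a, b, mode="full").max() / len(a)

def classify(sequence, templates):
    """Return the letter whose stored sequence correlates best with the live one."""
    return max(templates, key=lambda letter: ncc_peak(sequence, templates[letter]))

# Hypothetical usage: 60-frame placeholder sequences, one template per letter.
templates = {letter: np.random.rand(60) for letter in "ABC"}
live_sequence = np.random.rand(60)
print(classify(live_sequence, templates))
```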