2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus)
DOI: 10.1109/elconrus51938.2021.9396496

Leap Motion based Myanmar Sign Language Recognition using Machine Learning

Cited by 16 publications (4 citation statements)
References 4 publications
“…Using a vision-based approach, we can reduce the cost to only the trained dataset and model, recognizing signs with a simple capture device such as a webcam or a DroidCam (a third-party app that turns a mobile camera into a webcam).

Ref.            Used method(s)                            Accuracy (%)
[13]            Cyber gloves and HMMs                     95
[16]            Leap Motion device                        90.82
[20]            Flex sensors                              90.34 (precision score)
[21]            Data gloves and 3D position trackers      91.9
[22]            One-handed glove-based system and HMMs    94
Proposed work   LSTM                                      94.3
                GRU                                       76
                CNN                                       89.07…”
Section: Results (citation type: mentioning, confidence: 99%)
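The excerpt above compares glove- and sensor-based systems against sequence classifiers (LSTM, GRU, CNN). Purely as an illustration of the two compared model families, the sketch below shows how a GRU and a 1-D CNN could be set up to classify fixed-length sequences of hand keypoints. It is not the cited papers' code: the input shape, layer sizes, class count, and toy data are assumptions.

# Hypothetical sketch: GRU vs. 1-D CNN classifiers over hand-keypoint sequences.
# Shapes, layer sizes, and the number of sign classes are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

TIMESTEPS, FEATURES, NUM_CLASSES = 30, 63, 10  # e.g. 30 frames x (21 keypoints * 3 coords)

def build_gru():
    return models.Sequential([
        layers.Input(shape=(TIMESTEPS, FEATURES)),
        layers.GRU(64),                          # summarizes the temporal sequence
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def build_cnn():
    return models.Sequential([
        layers.Input(shape=(TIMESTEPS, FEATURES)),
        layers.Conv1D(64, kernel_size=3, activation="relu"),  # local temporal patterns
        layers.GlobalMaxPooling1D(),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

# Toy data standing in for extracted hand-keypoint sequences.
X = np.random.rand(100, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=100)

for build in (build_gru, build_cnn):
    model = build()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=1, batch_size=16, verbose=0)

In practice the two families trade off differently: the GRU models longer-range temporal dependencies, while the 1-D CNN captures short local motion patterns with fewer parameters.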
“…Hein et al. [16] implemented their approach in two segments: a training segment and a classification segment. First, webcam input video was captured for the training part.…”
Section: Literature Review (citation type: mentioning, confidence: 99%)
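The two-segment structure described in this excerpt (train first, then classify new input) can be pictured with a minimal, hypothetical sketch. The hand-feature vectors, class count, and the choice of an SVM classifier here are illustrative assumptions, not the cited paper's actual implementation.

# Hypothetical sketch of a two-segment pipeline: a training segment that fits a
# classifier on labelled hand-feature vectors, and a classification segment that
# predicts the sign for a new sample. Features, classes, and classifier are assumed.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Stand-in for features extracted from recorded input (e.g. hand-joint coordinates).
rng = np.random.default_rng(0)
X = rng.random((200, 15))           # 200 samples, 15 features each
y = rng.integers(0, 5, size=200)    # 5 hypothetical sign classes

# Training segment.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)

# Classification segment.
print("held-out accuracy:", clf.score(X_test, y_test))
print("predicted sign class:", clf.predict(X_test[:1])[0])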
“…By altering the dataset, the system can be used for several sign languages. Zaw Hein, Thet Paing Htoo, Bawin Aye, Sai Myo Htet, Kyaw Zaw Ye, et al. [15] proposed sign language recognition in two sections: a categorization section and a training section. First, a camera was used to record the input video for the training section.…”
Section: Related Work (citation type: mentioning, confidence: 99%)
“…Subsequent detection of the IF signal allows the extraction of motion information for the target. Because of its small antenna size and contactless operation capability, FMCW radar finds applications in various scenarios, including autonomous driving [8], sign language recognition [9][10][11][12], home automation [13,14], and many other fields. Therefore, FMCW radar plays a crucial role in human-machine interaction.…”
Section: Introduction (citation type: mentioning, confidence: 99%)
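The role of the IF (beat) signal mentioned in this excerpt can be illustrated with a short, hypothetical simulation: for a linear FMCW chirp of slope S, a target at range R produces a beat frequency f_b = 2SR/c, so an FFT of the IF signal recovers the range. The chirp parameters and the single ideal target below are assumptions chosen for illustration, not values from the cited works.

# Hypothetical sketch: an ideal FMCW IF (beat) signal for one target, with the
# target range recovered from the peak of the IF spectrum (f_b = 2*S*R/c).
import numpy as np

c = 3e8                      # speed of light, m/s
B, Tc = 4e9, 40e-6           # chirp bandwidth (Hz) and duration (s), illustrative values
S = B / Tc                   # chirp slope, Hz/s
fs = 10e6                    # IF sampling rate, Hz
R_true = 1.2                 # simulated target range, m (e.g. a hand in front of the radar)

t = np.arange(0, Tc, 1 / fs)
f_beat = 2 * S * R_true / c                  # expected beat frequency
if_signal = np.cos(2 * np.pi * f_beat * t)   # idealized, noise-free IF signal

# Range estimation: the peak of the IF spectrum maps back to range.
spectrum = np.abs(np.fft.rfft(if_signal))
freqs = np.fft.rfftfreq(len(if_signal), 1 / fs)
R_est = freqs[np.argmax(spectrum)] * c / (2 * S)
print(f"estimated range: {R_est:.2f} m (true: {R_true} m)")

Velocity, the other part of the motion information, would follow from the phase progression of this spectral peak across consecutive chirps (Doppler processing).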