2018 6th International Conference on Multimedia Computing and Systems (ICMCS)
DOI: 10.1109/icmcs.2018.8525908
Recognition of Static Hand Gesture

Cited by 11 publications (5 citation statements) · References 11 publications
“…On the other hand, the deep learning approach predominantly relies on the utilization of deep neural networks for representation learning and classification. Sadeddine et al (2018) [6] performed hand posture recognition in three phases: hand detection, feature extraction, and classification. In the hand detection phase, Otsu thresholding and binary operations were applied to identify the hand posture region.…”
Section: Related Work
Confidence: 99%
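The hand-detection step described above (Otsu thresholding followed by binary operations to isolate the hand region) can be sketched in a few lines. The following is a minimal NumPy illustration of Otsu's method on a synthetic grayscale image; the function name and test image are illustrative assumptions, not taken from the cited paper, which does not publish code here.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for an 8-bit grayscale image:
    the intensity level that maximizes between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                    # class-0 (background) probability
    mu = np.cumsum(prob * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]                              # global mean
    # Between-class variance for every candidate threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Synthetic stand-in for a hand posture image: dark background, bright region
img = np.full((64, 64), 40, dtype=np.uint8)
img[16:48, 16:48] = 200                        # bright square plays the "hand"
t = otsu_threshold(img)
mask = (img > t).astype(np.uint8)              # binary mask of the hand region
```

In a real pipeline the binary mask would then be cleaned with morphological operations (e.g. opening/closing) before feature extraction, which is presumably what "binary operations" refers to in the quoted description.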
“…Sadeddine et al (2018) [2] proposed an implementation of hand posture recognition using several descriptors on three different databases, namely American Sign Language (ASL), Arabic Sign Language (ArSL) and the NUS Hand Posture dataset. The system architecture was categorized into three phases, namely hand detection, feature extraction and classification.…”
Section: Related Work
Confidence: 99%
“…However, similar gestures such as "Dal" and "Thal" were misclassified, which resulted in 93.5% accuracy. Lately, the authors in [26] used two different neural networks and four visual descriptors to address the sign language recognition problem. In particular, they used the 30-letter ArSL dataset in [24], in which all images have a solid background and the hand is the only object within the image.…”
Section: Related Work
Confidence: 99%
“…ArSL recognition systems reported in the literature have tackled the problem in different ways. Some works [20] [22][31] [26] bypassed the segmentation stage by restricting the input images to a uniform background, making extraction of the hand shape easier. Other approaches opted to use external equipment to aid correct capturing of the hand gesture; for example, in [14][39] a Kinect sensor that captures both the intensity and the depth of the images was employed.…”
Section: Related Work
Confidence: 99%