2019 Chinese Control and Decision Conference (CCDC) 2019
DOI: 10.1109/ccdc.2019.8833328

Real-time Hand Gesture Recognition Based on Deep Learning in Complex Environments


Cited by 7 publications (4 citation statements)
References 7 publications
“…in 2019. The purpose of this research is to design and develop a real-time hand gesture recognition method for video streams using a deep-learning CNN algorithm in a complex environment, achieving an accuracy of 96% [2]. In addition, research conducted by Abdullah Mujahid et al. in 2021 addresses hand gesture recognition to help people with disabilities communicate, proposing a lightweight model based on YOLO (You Only Look Once) v3 and a CNN for gesture recognition, with an accuracy of 97.68% [3].…”
Section: Related Study
confidence: 99%
“…Technology that involves interaction between humans and computers has spread widely and rapidly, with various methods and techniques for overcoming problems in human life [4]. Human-computer interaction methods mainly include voice interaction and gesture interaction [2], because gestures carry rich information that conveys semantics and emotion and fits naturally into everyday human habits [13]. A gesture can take the form of a hand gesture, which allows us to express our thoughts clearly in everyday interactions [4].…”
Section: Introduction
confidence: 99%
“…Audio and visual-based input modalities implemented using sensors were developed as well, such as speech recognition based on the audio signal acquired with a microphone (e.g., [63,90,91]), facial expression recognition based on processing visual data from the camera [92], human body posture recognition using data from a conventional gray-level or color camera, thermal infrared sensor, depth sensor, or smart vision sensor [66], user-movement recognition using Kinect [72], gesture recognition based on data from a depth sensor [93] and a USB camera on a helmet [94], emotion recognition with a laptop camera [91], and so on. The Kinect can also be used to implement a solution for contact-free stress recognition, where the Kinect can provide respiration signals under different breathing patterns [95].…”
Section: Backgrounds and Related Work
confidence: 99%
“…Hand gestures are a critically important form of non-verbal communication. The interpretation of hand gestures with wearable sensors [1], [2], or cameras [3], [4] aims to transform the hand gestures into meaningful instructions; this interaction is also known as hand gesture recognition. The field of hand gesture recognition has seen significant improvements over the past few years [5] and, most recently, bundled with the latest advancements in computer vision, has encouraged the development of new technologies to support rehabilitation [6], [7], robot control, and home automation [8].…”
Section: Introduction
confidence: 99%
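The excerpt above frames hand gesture recognition as transforming gestures into meaningful instructions. The last step of that pipeline — turning a classifier's prediction into a command — can be sketched as follows; the gesture labels, confidence threshold, and command names below are illustrative assumptions, not taken from any of the cited papers:

```python
from typing import Optional

# Hypothetical mapping from recognized gesture labels to device commands.
GESTURE_COMMANDS = {
    "open_palm": "stop",
    "fist": "grab",
    "thumbs_up": "confirm",
    "swipe_left": "previous",
}

# Predictions below this confidence are ignored (assumed value).
CONFIDENCE_THRESHOLD = 0.8

def dispatch(label: str, confidence: float) -> Optional[str]:
    """Map a classifier's (label, confidence) output to an instruction.

    Returns None for unknown labels or low-confidence predictions,
    so the caller can simply skip that video frame.
    """
    if confidence < CONFIDENCE_THRESHOLD:
        return None
    return GESTURE_COMMANDS.get(label)

# A confident "fist" prediction becomes the "grab" instruction.
print(dispatch("fist", 0.93))  # -> grab
print(dispatch("fist", 0.40))  # -> None (below threshold)
```

Per-frame dispatch like this is typically smoothed in practice (e.g., by requiring the same label over several consecutive frames) to avoid jitter in real-time video.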