2016
DOI: 10.1007/s41060-016-0008-z

Recent methods in vision-based hand gesture recognition

Abstract: The goal of static hand gesture recognition is to classify hand gesture data, represented by some set of features, into a predefined finite number of gesture classes. The main objective of this effort is to explore the utility of two feature extraction methods, namely hand contour and complex moments, for solving the hand gesture recognition problem, identifying the primary advantages and disadvantages of each. An artificial neural network is built for classification using the back-propagation…
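The abstract describes a two-stage pipeline: moment-based features extracted from the hand image, then a neural network classifier trained with back-propagation. As a rough sketch of that shape of system, the snippet below uses OpenCV's Hu moments as a stand-in for the paper's complex moments and scikit-learn's back-propagation-trained MLP as the classifier; the synthetic masks, six-class labels, and network width are illustrative assumptions, not details from the paper.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier  # trained via back-propagation

def moment_features(mask: np.ndarray) -> np.ndarray:
    """Seven rotation/scale-invariant Hu moments of a binary hand mask
    (a stand-in for the paper's complex moments), log-scaled so the
    values sit on comparable magnitudes."""
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

# Synthetic stand-in data: random 64x64 masks and six gesture classes.
rng = np.random.default_rng(0)
masks = (rng.integers(0, 2, size=(100, 64, 64)) * 255).astype(np.uint8)
labels = rng.integers(0, 6, size=100)

X = np.stack([moment_features(mask) for mask in masks])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
clf.fit(X, labels)  # back-propagation training of the gesture classifier
```

With real hand masks, the paper's other feature option, hand-contour features, could instead be derived from the output of cv2.findContours.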

Cited by 29 publications (7 citation statements)
References 27 publications
Citing publications: 2017–2023

“…In view of this, researchers have begun utilising deep learning approaches such as CNNs [ 1 , 2 , 3 , 4 , 6 , 17 , 27 , 28 , 29 , 30 , 31 , 32 ] and ANNs [ 33 ] over conventional hand-crafted methods. Deep learning approaches have the ability to automatically discover complex and important features through their hidden layers, saving time and reducing bias during feature extraction.…”
Section: Related Work (mentioning)
confidence: 99%
“…In contrast, deep learning approaches are capable of automatically learning high-level representations of the data through multiple layers of abstraction, reducing the need for manual feature engineering and improving performance on complex tasks. Consequently, researchers shifted towards employing deep learning approaches such as CNNs [1][2][3][4][5][6][22][23][24][25][26][27], artificial neural networks (ANNs) [30], and autoencoders [31,32] instead of traditional hand-crafted methods. Tan et al. [3] proposed a CNN model with spatial pyramid pooling (CNN-SPP), utilizing SPP instead of max pooling or average pooling to capture more spatial information and facilitate better learning.…”
Section: Related Work (mentioning)
confidence: 99%
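The CNN-SPP design quoted above (Tan et al. [3]) swaps a single final pooling step for spatial pyramid pooling: the feature map is max-pooled at several fixed grid sizes and the results are concatenated, giving a fixed-length vector regardless of input resolution. A minimal PyTorch sketch follows, with pyramid levels (1, 2, 4) and a toy convolutional stem chosen for illustration rather than taken from the cited paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPP(nn.Module):
    """Spatial pyramid pooling: pool at several grid sizes, concatenate."""
    def __init__(self, levels=(1, 2, 4)):          # assumed pyramid levels
        super().__init__()
        self.levels = levels

    def forward(self, x):                          # x: (batch, channels, H, W)
        pooled = [F.adaptive_max_pool2d(x, out).flatten(1) for out in self.levels]
        return torch.cat(pooled, dim=1)            # fixed-length vector per sample

stem = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())  # toy stem
spp = SPP()
for size in (32, 48):                              # varying input resolutions
    feats = spp(stem(torch.randn(2, 1, size, size)))
    print(feats.shape)                             # (2, 16 * (1 + 4 + 16)) both times
```

Because the output length depends only on the channel count and the pyramid levels, the same classifier head can sit on top of variably sized hand crops.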
“…These extracted characteristics are subsequently input into a classification algorithm for the purpose of gesture categorization. Alternatively, the deep learning approach utilizes deep learning networks, such as convolutional neural networks (CNNs) [1][2][3][4][5][6][22][23][24][25][26][27][28][29], artificial neural networks (ANNs) [30], and autoencoders [31,32], to automatically extract features from the hand gestures. Deep learning networks showcase adaptability to various challenges in static hand gesture recognition, learning from extensive datasets and accommodating diverse environmental factors such as lighting conditions, complex backgrounds, and variations in hand size and skin tone.…”
Section: Introduction (mentioning)
confidence: 99%
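Of the deep approaches this quote lists, the autoencoder route learns features without manual engineering by training the network to reconstruct its input and reusing the bottleneck activations as the gesture representation. A minimal sketch under assumed sizes (64x64 flattened input, 32-dimensional code), none of which come from the cited works:

```python
import torch
import torch.nn as nn

class GestureAutoencoder(nn.Module):
    """Reconstructs flattened hand images; the encoder's bottleneck
    output serves as an automatically learned feature vector."""
    def __init__(self, dim: int = 64 * 64, code: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, code))
        self.decoder = nn.Sequential(nn.Linear(code, 256), nn.ReLU(),
                                     nn.Linear(256, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = GestureAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(8, 64 * 64)                  # stand-in batch of flattened images
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), x)  # reconstruction objective
loss.backward()
opt.step()
with torch.no_grad():
    features = model.encoder(x)             # features for a downstream classifier
```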
“…Based on their methods, hand gesture recognition technology is mainly divided into two categories: data-glove-based methods and computer-vision-based methods. The computer-vision-based methods can be classified into two main classes [2]: static gestures [3][4][5][6][7], performed with one or both hands without movement, and dynamic gestures [8,9,2,10,11,12], performed as a sequence of hand images following a path or a predefined behaviour. Both static and dynamic gestures are extracted from a sequence of hand images (video); in the first the hand must keep the same shape and position throughout, while in the second the gesture varies slightly in form or position across frames.…”
Section: Introduction (mentioning)
confidence: 99%