2008
DOI: 10.1016/j.imavis.2008.03.004
Hand gesture recognition and tracking based on distributed locally linear embedding

Cited by 79 publications (22 citation statements)
References 27 publications
“…A methodology called neighborhood linear embedding (NLE) [10] has been developed to discover the intrinsic properties of the input data; it is an adaptive scheme that avoids the trial-and-error parameter selection of LLE. We modify the LLE algorithm and propose a new DLLE to discover the inherent properties of the input data [13].…”

Section: Nonlinear Dimension Reduction (NDR) Methods for Person Depen…
Confidence: 99%
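The trial-and-error step that NLE and DLLE aim to remove is the hand-tuning of the neighborhood size in standard LLE. A minimal sketch of that baseline, using scikit-learn's LLE implementation on synthetic swiss-roll data as a stand-in for gesture features (the fixed `n_neighbors=12` is exactly the parameter that must be chosen by trial and error):

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Synthetic nonlinear manifold data (a stand-in for high-dimensional
# hand-gesture descriptors; the cited works use real image features)
X, _ = make_swiss_roll(n_samples=500, random_state=0)

# Standard LLE: the neighborhood size k (n_neighbors) must be picked
# manually -- the limitation that adaptive schemes like NLE/DLLE address
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y = lle.fit_transform(X)
print(Y.shape)  # (500, 2)
```

Varying `n_neighbors` changes the embedding quality substantially, which is why an adaptive neighbor-selection scheme is attractive.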
“…Feature extraction is a method of encoding data with high accuracy. PCA is the main feature-extraction method used by the system [15].…”

Section: Feature Reduction
Confidence: 99%
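PCA-based feature reduction of the kind this excerpt describes can be sketched as follows; the feature matrix here is hypothetical random data standing in for gesture descriptors, and the choice of 10 components is illustrative, not taken from the cited system:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical feature matrix: 200 samples of 64-dimensional descriptors
X = rng.normal(size=(200, 64))

# Project onto the top principal components to reduce dimensionality
pca = PCA(n_components=10)
Z = pca.fit_transform(X)
print(Z.shape)  # (200, 10)
```

`pca.explained_variance_ratio_` can then guide how many components to keep for a given accuracy target.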
“…Hand gesture recognition requires solving a number of challenging computer vision and pattern recognition tasks, including (i) human skin segmentation [19,20] to extract hand regions from color images, (ii) hand pose estimation [8], (iii) hand tracking [10], and (iv) hand motion analysis and recognition [28,41]. Among the methods for estimating a hand pose, there are solutions based on localizing hand landmarks [6,17,45,51,54], extracting hand shape features [36,37,56], or fitting the parameters of a 3D hand model [15,49,59].…”
Section: Overview of Vision-Based Gesture Recognition
Confidence: 99%
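Of the tasks listed above, skin segmentation (i) is the simplest to illustrate. The cited works [19, 20] use their own segmentation models; the sketch below uses only a classic fixed RGB skin-color rule as a minimal stand-in:

```python
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Classic fixed RGB skin-color rule (a simple heuristic only;
    learned color models are typically used in practice)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)

# Tiny synthetic image: one skin-like pixel, one dark background pixel
img = np.array([[[200, 120, 90], [30, 30, 30]]], dtype=np.uint8)
print(skin_mask(img))  # [[ True False]]
```

The resulting binary mask would feed the later stages of the pipeline (hand-region extraction, pose estimation, tracking).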