Proceedings of the 4th International Conference on PErvasive Technologies Related to Assistive Environments 2011
DOI: 10.1145/2141622.2141647
Comparing gesture recognition accuracy using color and depth information

Abstract: In human-computer interaction applications, gesture recognition has the potential to provide a natural way of communication between humans and machines. The technology is becoming mature enough to be widely available to the public, and real-world computer vision applications are starting to emerge. A typical example of this trend is the gaming industry and the launch of Microsoft's new camera: the Kinect. Other domains where gesture recognition is needed include, but are not limited to: sign language recognition, vir…

Cited by 84 publications (54 citation statements)
References 23 publications
“…Figures 5 and 6 show the results for the skeletal tracker and its comparison to the single hand detector, respectively. We can see in figure 6, for example, that 50% of the signs had a maximum pixel error of about 22 pixels or less when the comparison method of [4] was used to detect the hands. An example frame shows good accuracy using the skeletal tracker on both hands in a two-handed sign.…”
Section: Results
confidence: 99%
“…This operation was performed on each frame of the signs, and the accuracy was calculated to serve as the benchmark for the evaluation of future methods. As an example comparison, we processed the one-handed signs with the single hand locator described in [4]-a method based on movement and depth alone-and calculated the results using the same pixel Euclidean distance similarity measure.…”
Section: Methods
confidence: 99%
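The "pixel Euclidean distance similarity measure" quoted above can be sketched in a few lines. The function names and the per-sign maximum-error aggregation below are illustrative assumptions based on the quote, not the authors' actual code:

```python
import math

def pixel_error(detected, ground_truth):
    """Euclidean distance in pixels between a detected hand
    position and the annotated ground-truth position."""
    dx = detected[0] - ground_truth[0]
    dy = detected[1] - ground_truth[1]
    return math.hypot(dx, dy)

def max_pixel_error(detections, annotations):
    """Per-sign accuracy: the maximum per-frame pixel error
    over all frames of the sign (frames paired in order)."""
    return max(pixel_error(d, g) for d, g in zip(detections, annotations))
```

Computed per frame and aggregated per sign, this kind of error yields the cumulative accuracy curves ("50% of the signs had a maximum pixel error of about 22 pixels or less") that the citing paper reports.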
“…For example, in [9], the palm position in each frame is extracted, and thus a trajectory curve is formed. The frame rate of the Kinect is 30 frames per second, so if the dynamic sign language gesture "graduation certificate" lasts for 2.13 seconds, then a total of 64 frames and 64 corresponding palm points can be obtained.…”
Section: Trace Acquirement
confidence: 99%
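The frame-count arithmetic in the quote (30 fps × 2.13 s ≈ 64 frames, hence 64 palm trajectory points) can be checked directly; the helper name is hypothetical:

```python
KINECT_FPS = 30  # Kinect color/depth stream frame rate, as stated in the quote

def frame_count(duration_s, fps=KINECT_FPS):
    """Number of captured frames (and hence palm trajectory points)
    for a gesture of the given duration in seconds."""
    return round(duration_s * fps)
```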
“…Several methods have been suggested to detect the signs easily and accurately. Among the noteworthy methodologies used for sign recognition are Template Matching (Liu & Fujimura, 2004), Conditional Random Fields (CRF) (Saad et al, 2012) and Dynamic Time Warping (DTW) (Doliotis et al, 2011). Dynamic signs are the signs which rely on hands, head and body motion.…”
Section: Literature Review
confidence: 99%