2013 IEEE International Conference on Computer Vision
DOI: 10.1109/iccv.2013.305

Interactive Markerless Articulated Hand Motion Tracking Using RGB and Depth Data

Cited by 211 publications (209 citation statements)
References 19 publications
“…However, recognizing complex hand gestures is clearly challenging from such 2D input [6]. The recent advancements in processing power and the emergence of consumer grade depth cameras, however, have enabled a number of high fidelity gestural interactive systems in HCI [10,12,21] and fine-grained 3D hand-pose estimation in real-time [19,30,35]. The current state-of-the art can be broken down into methods relying on model-fitting and temporal tracking [30,35], and those leveraging per-pixel hand part classification [19,38].…”
Section: Related Work (mentioning)
confidence: 99%
“…We evaluated our method on the MPI Dexter 1 hand dataset [78] in order to validate whether it can be used for matching similar motions. The dataset consists of seven sequences of hand motions of a single actor.…”
Section: Matching Similar Motions (mentioning)
confidence: 99%
“…For each set, we quote the number of samples, classes and subjects, provided that they are explicitly given in the database description. It may be noticed that besides our two hand gesture recognition (HGR) sets (marked as bold), there are only two data sets (ColorTip [52] and Dexter 1 [50]) with landmarks localization annotated. Moreover, in these two cases, the metadata include only the fingertips locations.…”
Section: Data Sets (mentioning)
confidence: 99%