2015
DOI: 10.1007/s11263-015-0826-9

Estimate Hand Poses Efficiently from Single Depth Images

Abstract: This paper aims to tackle the practically very challenging problem of efficient and accurate hand pose estimation from single depth images. A dedicated two-step regression forest pipeline is proposed: given an input hand depth image, step one involves mainly estimation of 3D location and in-plane rotation of the hand using a pixelwise regression forest. This is utilized in step two which delivers final hand estimation by a similar regression forest model based on the entire hand image patch. Moreover, our esti…
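The abstract's two-step pipeline can be sketched with off-the-shelf random forests. Everything below is an illustrative stand-in, not the paper's implementation: the feature dimensions, the synthetic data, and the mean-vote aggregation are assumptions made only to show the two-stage structure (pixelwise votes for location/rotation, then whole-patch regression of the joint configuration).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stage 1: a pixelwise forest votes for the hand's 3D location and
# in-plane rotation from per-pixel depth features (synthetic stand-ins).
pixel_features = rng.normal(size=(1000, 16))   # e.g. depth-difference features
pixel_targets = rng.normal(size=(1000, 4))     # (x, y, z, rotation) per pixel
stage1 = RandomForestRegressor(n_estimators=20, random_state=0)
stage1.fit(pixel_features, pixel_targets)

# Aggregate the per-pixel votes (here simply by averaging) into a single
# location/rotation estimate used to normalize the hand patch.
votes = stage1.predict(pixel_features[:64])
location_and_rotation = votes.mean(axis=0)     # shape (4,)

# Stage 2: a second forest regresses the full joint configuration from
# features of the entire (normalized) hand image patch.
patch_features = rng.normal(size=(200, 32))    # whole-patch features
joint_targets = rng.normal(size=(200, 3 * 21)) # 21 joints x (x, y, z)
stage2 = RandomForestRegressor(n_estimators=20, random_state=0)
stage2.fit(patch_features, joint_targets)
pose = stage2.predict(patch_features[:1])      # shape (1, 63)
```

The design point the abstract makes is that stage one removes location and in-plane-rotation variation, so the stage-two forest only has to model the remaining articulation.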

Cited by 27 publications (28 citation statements)

References 32 publications
“…The 19 ASL hand-actions are air, alphabets, bank, bus, gallon, high school, how much, ketchup, lab, leg, lady, quiz, refrigerator, several, sink, stepmother, teaspoon, throw, xray. b) Atomic Hand Action Detection: To detect atomic-level hand-actions, we make use of the existing hand pose estimation system [30] with a postprocessing step that maps joint-location predictions to the bent/straight states of the fingers. To evaluate the interval-level atomic action detection results, we follow common practice and use the intersection-over-union of intervals with a 50% threshold to identify hits, false alarms, and misses.…”
Section: B. Our Complex Hand Activity Dataset, a) Data Collection (mentioning)
confidence: 99%
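The interval-matching criterion this citing paper describes can be made concrete in a few lines. This is a minimal sketch of the standard temporal-IoU rule (a detection counts as a hit when its IoU with a ground-truth interval reaches 50%); the function names are illustrative, not from either paper.

```python
def interval_iou(pred, gt):
    """Intersection-over-union of two 1-D intervals given as (start, end)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def classify_detection(pred, gt, threshold=0.5):
    """Hit if IoU >= threshold, otherwise a false alarm; ground-truth
    intervals left unmatched by any detection are counted as misses."""
    return "hit" if interval_iou(pred, gt) >= threshold else "false alarm"
```

For example, a detection spanning frames 0–10 against ground truth 2–10 has IoU 0.8 (a hit), while the same detection against ground truth 5–15 has IoU 1/3 (a false alarm at the 50% threshold).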
“…The recent introduction of commodity depth cameras has led to significant progress in analyzing articulated objects, especially the human full body and hand. In terms of pose estimation, the Microsoft Kinect is already widely used in practice at full-body scale, while hand-scale estimation remains a research topic [45,30,52,31,43,42,54], partly due to the dexterous nature of hand articulations. [45] is among the first to develop a dedicated convolutional neural network (CNN) method for hand pose estimation, followed by [30].…”
Section: Related Work (mentioning)
confidence: 99%
“…[54] further considers incorporating geometry information in hand modelling by embedding a non-linear generative process within a deep learning framework. [52] studies and evaluates a theoretically motivated random forest method for hand pose estimation. A hier…”
[Figure 1 caption from the citing paper: A cartoon illustration of our main idea: an articulated object can be considered as a point in a certain manifold.]
Section: Related Work (mentioning)
confidence: 99%
“…There are some other existing real hand pose datasets captured from a frontal camera view, e.g. Dexter [21], SHREC-2017, MSRA-2014 [16], ASTAR [31]. However, these datasets either contain a small number of original images, lack depth information, provide few ground-truth joint positions, or contain many outliers in the annotations.…”
Section: Hand Pose Datasets Based On Real Depth Data (mentioning)
confidence: 99%