2013 IEEE Conference on Computer Vision and Pattern Recognition 2013
DOI: 10.1109/cvpr.2013.458
Pixel-Level Hand Detection in Ego-centric Videos

Cited by 210 publications (250 citation statements); references 20 publications.
“…Many pixel-based hand segmentation methods have been proposed in the literature [13,14,15,16,17,18,19,20,21,22,23,24]. The feature descriptor used to perform the detection is a critical factor.…”
Section: Hand Detection
confidence: 99%
“…These techniques may be computationally prohibitive for real-time applications, a problem that is exacerbated by the limited computational resources of mobile and wearable devices. Researchers have also explored combining color with additional features such as texture [14,16,18,23]. In [14], a generic pixel-level hand detector based on a combination of color, texture, and gradient histogram features is trained using over 600 manually labeled hand images (over 200 million labeled pixels) acquired under various illumination conditions and backgrounds, and has been shown to outperform several baseline approaches.…”
Section: Hand Detection
confidence: 99%
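The pixel-level training regime quoted above can be illustrated with a minimal sketch. The feature choice here (normalized RGB plus a gradient-magnitude texture proxy) and the logistic-regression classifier are deliberate simplifications and assumptions, not the cited paper's pipeline, which uses richer color/texture/gradient-histogram descriptors and stronger classifiers:

```python
import numpy as np

def pixel_features(img):
    """Per-pixel features: normalized RGB color plus a local
    gradient-magnitude cue (a crude stand-in for the texture and
    gradient-histogram descriptors described in the quoted passage)."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img.mean(axis=2))       # gray-level gradients
    grad = np.sqrt(gx ** 2 + gy ** 2) / 255.0    # texture proxy
    h, w, _ = img.shape
    return np.hstack([img.reshape(h * w, 3) / 255.0,
                      grad.reshape(h * w, 1)])

def train_logistic(X, y, lr=0.5, iters=300):
    """Tiny logistic-regression trainer, standing in for the classifiers
    used in this line of work; it only illustrates per-pixel supervision."""
    Xb = np.hstack([X, np.ones((len(X), 1))])    # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)        # gradient step
    return w

def hand_probability(img, w):
    """Return an H x W map of per-pixel hand probabilities."""
    X = pixel_features(img)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w))).reshape(img.shape[:2])
```

Training on many labeled pixels across illumination conditions, as the quoted passage describes, amounts to stacking such feature/label pairs from hundreds of images before fitting the classifier.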
“…1. We first detect hands at the pixel level with [19], computing a hand probability value for each pixel on the basis of the color and texture of a local surrounding image patch. The output probabilities are averaged over recent frames to remove noise.…”
Section: Combined Motion and Shape Approach
confidence: 99%
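The temporal smoothing step quoted above (averaging per-pixel probability maps over recent frames to remove noise) can be sketched as a fixed-size sliding window; the window length k is an assumption, not a value taken from the cited work:

```python
from collections import deque
import numpy as np

class TemporalSmoother:
    """Running mean of per-pixel hand-probability maps over the last k
    frames, suppressing frame-to-frame detector noise. k is a free
    parameter here, not one fixed by the cited paper."""

    def __init__(self, k=5):
        self.window = deque(maxlen=k)   # drops the oldest map automatically

    def update(self, prob_map):
        """Add the current frame's probability map; return the smoothed map."""
        self.window.append(np.asarray(prob_map, dtype=np.float64))
        return np.mean(np.stack(self.window), axis=0)
```

Feeding each frame's output of the pixel-level detector through `update` yields the stabilized map used by the downstream motion-and-shape pipeline.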
“…Baraldi et al. [18] applied dense trajectories to gesture recognition in first-person views. They introduced pixel-level hand detection [19] to reduce tracking keypoints and improved both accuracy and performance. Although they demonstrated that dense trajectories can be applied for gesture recognition in first-person view scenarios, they used only local features from dense trajectories.…”
Section: Introduction
confidence: 99%