2014
DOI: 10.1145/2601097.2601223
Learning to be a depth camera for close-range human capture and interaction

Abstract: We present a machine learning technique for estimating absolute, per-pixel depth using any conventional monocular 2D camera, with minor hardware modifications. Our approach targets close-range human capture and interaction where dense 3D estimation of hands and faces is desired. We use hybrid classification-regression forests to learn how to map from near infrared intensity images to absolute, metric depth in real time. We demonstrate a variety of human-computer…
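The abstract's core idea, learning a per-pixel mapping from near-IR intensity to metric depth with a forest of regressors, can be illustrated with a toy ensemble. This is not the paper's hybrid classification-regression forest: the "trees" below are crude piecewise-constant regressors fit to synthetic data generated under an inverse-square fall-off model, and all names, bin counts, and noise levels are illustrative assumptions.

```python
import random, bisect

random.seed(0)

def make_tree(pairs, n_bins=16):
    """One toy 'tree': bootstrap-resample the data, sort by intensity,
    and average the depth within each intensity quantile bin."""
    pairs = sorted(random.choices(pairs, k=len(pairs)))  # bootstrap resample
    size = len(pairs) // n_bins
    edges, means = [], []
    for b in range(n_bins):
        chunk = pairs[b * size:(b + 1) * size]
        edges.append(chunk[-1][0])                       # upper intensity edge
        means.append(sum(d for _, d in chunk) / len(chunk))
    return edges, means

def tree_predict(tree, intensity):
    """Look up the bin containing this intensity and return its mean depth."""
    edges, means = tree
    return means[min(bisect.bisect_left(edges, intensity), len(means) - 1)]

# Synthetic close-range training set: received IR intensity falls off
# roughly as 1/depth^2, plus noise standing in for albedo variation.
pairs = []
for _ in range(4000):
    d = random.uniform(0.2, 0.8)                         # depth in metres
    pairs.append((1.0 / d ** 2 + random.gauss(0, 0.5), d))

forest = [make_tree(pairs) for _ in range(25)]

# Predict depth for a pixel observed at the intensity a 0.5 m surface produces.
pred = sum(tree_predict(t, 1.0 / 0.5 ** 2) for t in forest) / len(forest)
```

Averaging many bootstrapped regressors is the same variance-reduction idea a real random forest uses; the paper's method additionally classifies pixels before regressing, which this sketch omits.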

Cited by 62 publications (48 citation statements). References 39 publications.
“…Similarly to our work, Fanello et al [2] learn a direct mapping between infrared intensity and depth measurements, to predict per-pixel depth values. Relying on infrared intensity fall-off, this requires mounting of IR illuminants and an IR pass filter, and as such, renders the camera unusable for other purposes while also increasing power draw.…”
Section: Spatial Interaction For Wearable Computing
confidence: 90%
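The intensity fall-off cue this citation refers to can be sketched in a few lines: for a co-located IR illuminant, received intensity scales as 1/depth², so depth can be read off the intensity. The function name, reference values, and the assumptions of constant albedo and ignored surface orientation are all simplifications for illustration.

```python
import math

def depth_from_intensity(i, i_ref=1.0, d_ref=1.0):
    """Invert the inverse-square model i = i_ref * (d_ref / d)**2,
    giving d = d_ref * sqrt(i_ref / i)."""
    return d_ref * math.sqrt(i_ref / i)

# A surface returning 4x the reference intensity sits at half the reference depth.
print(depth_from_intensity(4.0))  # → 0.5
```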
“…Our method is similar to those proposed in [2,13]. In [13] several randomized decision forests (RFs) are combined to enable the recognition of discrete 2D hand shapes or gestures.…”
Section: Spatial Interaction For Wearable Computing
confidence: 99%