2015
DOI: 10.1007/978-3-319-16199-0_52
Egocentric Object Recognition Leveraging the 3D Shape of the Grasping Hand

Abstract: We present a systematic study of the relationship between the 3D shape of a hand that is about to grasp an object and recognition of the object to be grasped. In this paper, we investigate the direction from the shape of the hand to object recognition for unimpaired users. Our work shows that the 3D shape of a grasping hand from an egocentric point of view can help improve recognition of the objects being grasped. Previous work has attempted to exploit hand interactions or gaze information in the ego…

Cited by 4 publications (5 citation statements)
References 27 publications
“…Another well-studied problem is hand pose estimation in 3D vision [9, 31], with some work starting to consider first-person data. For instance, Lin et al. [22] show that the 3D shape of a grasping hand can help improve recognition of the objects being grasped. We also study pose in the context of interacting with (and thus grasping) objects, but we do not require depth information.…”
Section: Related Work
confidence: 99%
“…While simply detecting hands may be sufficient for some applications, pixelwise segmentation is often more useful, especially for applications like hand pose recognition and in-hand object detection [22]. Once we have accurately localized hands using the above approach, segmentation is relatively straightforward, as we show in this section.…”
Section: Segmenting Hands
confidence: 99%
“…Thus, the intrinsic characteristics of egocentric vision have allowed the development of automated methods to study and recognize different grasp types, saving a huge amount of manual labor. Although in most cases hand grasp analysis has been addressed in a supervised manner (grasp recognition, Section 4.1.1) [68], [87], [118], [119], [120], [121], some authors have proposed tackling this problem with clustering approaches, in order to discover dominant modes of hand-object interaction and identify high-level relationships among clusters (grasp clustering and abstraction, Section 4.1.2) [87], [111], [118], [122].…”
Section: Hand Grasp Analysis and Gesture Recognition
confidence: 99%
“…With the development of portable depth sensors, some work done with RGB-D images shows promising results in the fields of egocentric object recognition [12] [24] and activity recognition [16] [20]. Rogez et al. [18] [19] focused on the task of 3D hand pose detection and trained the detector with depth information using synthetic training exemplars.…”
Section: Previous Work
confidence: 99%
“…[16] [24] further showed that foreground segmentation is efficient enough to separate the hand and the handled object by combining RGB images with depth data. Lin et al. [12] built a generic skin-tone model with histogram of oriented normal vectors (HONV) features that works well with 3D point clouds. Useful as the depth information is, current portable RGB-D sensors, such as stereo cameras and structured-light cameras, have the drawback of high power consumption.…”
Section: Previous Work
confidence: 99%