2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01547

H2O: A Benchmark for Visual Human-human Object Handover Analysis

Cited by 22 publications (12 citation statements) | References 21 publications
“…1). The marker-based datasets collect hand poses with the aid of hand-attached magnetic sensors [15,59,60] or reflective markers [52]. 2).…”
Section: Related Work
confidence: 99%
“…While the word "affordance" has different formulations in different tasks, in this paper, we denote "affordance" as the functionality of the object. Since 2019, at least 9 datasets of hand-object interaction have been released: ObMan [23], YCBAfford [11], HO3D [19], ContactPose [5], GRAB [52], DexYCB [10], two H2O [29,59] and DexMV [42]. However, these datasets lack comprehensive awareness of the object's affordance and the hand's interactions with it.…”
Section: Introduction
confidence: 99%
“…Our work is also related to recent efforts on standardizing the experimental setting and protocol for handovers. Ye et al [34] proposed a large-scale human-to-human handover dataset with object and hand pose annotations, and used it to study human grasp prediction. Rather than predicting human grasps, Chao et al [15] studied robot grasp generation for safe H2R handovers.…”
Section: Related Work
confidence: 99%
“…Also, most givers in human handovers preferred a precision grasp irrespective of the object. In [17], an image-based dataset focused on hand poses and grasps via visual analysis of handovers is described, where the hand poses are tracked by markers placed on each finger.…”
Section: Background and Related Work
confidence: 99%