2020
DOI: 10.48550/arxiv.2006.01797
Preprint
Object-Independent Human-to-Robot Handovers using Real Time Robotic Vision

Abstract: We present an approach for safe and object-independent human-to-robot handovers using real time robotic vision and manipulation. We aim for general applicability with a generic object detector, a fast grasp selection algorithm and by using a single gripper-mounted RGB-D camera, hence not relying on external sensors. The robot is controlled via visual servoing towards the object of interest. Putting a high emphasis on safety, we use two perception modules: human body part segmentation and hand/finger segmentation…
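
The abstract outlines a perception-and-control loop: a generic detector localizes the object in the gripper-mounted camera's view, a grasp point is selected, and the arm is visually servoed towards it while hand/finger segmentation gates motion for safety. A minimal sketch of one such iteration is given below; the function names, pixel threshold and servo gain are illustrative assumptions, not the authors' implementation.

import numpy as np

HAND_CLEARANCE_PX = 20   # assumed pixel margin between the grasp point and any hand/finger pixel
SERVO_GAIN = 0.5         # assumed proportional gain for the image-based servo law

def servo_command(grasp_uv, hand_mask, image_shape):
    """Return an image-space velocity towards the grasp point, or None to stop for safety."""
    ys, xs = np.nonzero(hand_mask)
    if xs.size:
        # Distance from the selected grasp pixel to the nearest segmented hand/finger pixel.
        nearest = np.min(np.hypot(xs - grasp_uv[0], ys - grasp_uv[1]))
        if nearest < HAND_CLEARANCE_PX:
            return None  # grasp would be too close to the human hand: command a stop
    # Proportional visual servoing: drive the grasp point towards the image centre.
    centre = np.array([image_shape[1] / 2.0, image_shape[0] / 2.0])
    return SERVO_GAIN * (centre - np.asarray(grasp_uv, dtype=float))

# Example on a synthetic 640x480 frame: a grasp next to the (fake) hand mask triggers a stop.
mask = np.zeros((480, 640), dtype=bool)
mask[200:240, 300:340] = True
print(servo_command((310, 220), mask, (480, 640)))   # None -> stop
print(servo_command((100, 100), mask, (480, 640)))   # velocity towards the image centre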

Cited by 3 publications (11 citation statements) · References 24 publications
“…Human body keypoints are commonly used in robotics applications, such as for ensuring safety in Human-Robot Interaction [6], [12] and for recognizing gestures such as pointing gestures [13].…”
Section: B. Body Keypoints Detection
mentioning confidence: 99%
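
The quoted statement points to detected body keypoints as a safety mechanism in human-robot interaction. The hypothetical sketch below illustrates one such check: if any 3D keypoint comes within an assumed distance of the end-effector, the robot stops. The threshold and function name are assumptions for illustration only.

import numpy as np

SAFETY_DISTANCE_M = 0.30  # assumed minimum allowed human-robot distance in metres

def human_too_close(keypoints_xyz, ee_position):
    """True if any detected 3D body keypoint lies within the safety distance of the end-effector."""
    keypoints_xyz = np.atleast_2d(keypoints_xyz)
    if keypoints_xyz.size == 0:
        return False  # no person detected; this simple sketch treats that as safe
    dists = np.linalg.norm(keypoints_xyz - ee_position, axis=1)
    return bool(np.min(dists) < SAFETY_DISTANCE_M)

# Example: a wrist keypoint about 0.07 m from the gripper triggers a stop.
keypoints = np.array([[0.50, 0.10, 0.40],   # e.g. shoulder
                      [0.45, 0.00, 0.35]])  # e.g. wrist
print(human_too_close(keypoints, np.array([0.50, 0.00, 0.30])))  # True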
“…Human-to-Robot handovers have been demonstrated on a physical robot system by a handful of researchers [1]-[6]. Even though successful demonstrations have been shown, in these works the robot is programmed for a single task: object handovers.…”
Section: Introduction
mentioning confidence: 99%
“…Among the surveyed papers that involved real human-robot interactions, the majority focused on robot-to-human (R2H) handovers, while 11 involved human-to-robot (H2R) handovers. Most papers (26 out of 38) involved a single object being handed over; only one paper had more than six objects (i.e., Rosenberg et al. with 13 objects [5]). Hence, despite the great progress made in the field, there is a gap in generalization to larger sets of objects, or ultimately, arbitrary objects.…”
Section: Introduction
mentioning confidence: 99%
“…Konstantinova et al. aim to address the challenge of handing over arbitrary objects, based on a method that relies only on wrist force sensors (i.e., no vision or tactile information), but still requires the person to bring the object in contact with the robot gripper [18]. Most closely related to this paper is recent work by Rosenberg et al., who developed a method for grasping objects that can be recognized by the robot from a human giver's hand [5]. Their object detector is based on the YOLO V3 object detector [19] trained on 80 object categories from the COCO [20] dataset, and they generate grasps using a modified GG-CNN [21].…”
Section: Introduction
mentioning confidence: 99%
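
This statement summarizes the cited pipeline: a YOLO V3 detector proposes an object bounding box and a modified GG-CNN produces grasps. The sketch below shows only a generic crop-and-argmax step that turns a detection box and a per-pixel grasp-quality map into a grasp point; the quality network is mocked, and all names are illustrative assumptions rather than the authors' code.

import numpy as np

def select_grasp(depth, bbox, grasp_quality_fn):
    """Pick the highest-quality grasp pixel inside a detected bounding box.

    depth            : HxW depth image from the gripper-mounted RGB-D camera
    bbox             : (x0, y0, x1, y1) box from an object detector such as YOLO V3
    grasp_quality_fn : maps a depth crop to a per-pixel grasp-quality map
                       (standing in for a GG-CNN-style forward pass)
    """
    x0, y0, x1, y1 = bbox
    crop = depth[y0:y1, x0:x1]
    quality = grasp_quality_fn(crop)               # same shape as the crop
    v, u = np.unravel_index(np.argmax(quality), quality.shape)
    return (x0 + u, y0 + v), float(quality[v, u])  # grasp pixel in full-image coordinates

# Mocked quality map that peaks at the crop centre, just to exercise the function.
def fake_grasp_quality(crop):
    h, w = crop.shape
    yy, xx = np.mgrid[0:h, 0:w]
    return np.exp(-((yy - h / 2.0) ** 2 + (xx - w / 2.0) ** 2) / 200.0)

depth = np.random.uniform(0.3, 0.8, size=(480, 640))
print(select_grasp(depth, (200, 150, 280, 230), fake_grasp_quality))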