2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8460887

Dex-Net 3.0: Computing Robust Vacuum Suction Grasp Targets in Point Clouds Using a New Analytic Model and Deep Learning

Abstract: Vacuum-based end effectors are widely used in industry and are often preferred over parallel-jaw and multifinger grippers due to their ability to lift objects with a single point of contact. Suction grasp planners often target planar surfaces on point clouds near the estimated centroid of an object. In this paper, we propose a compliant suction contact model that computes the quality of the seal between the suction cup and local target surface and a measure of the ability of the suction grasp to resist an exte…
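The abstract's two-part criterion (seal formation plus resistance to an external wrench, evaluated under perturbations) can be sketched roughly in code. The planarity and tilt checks below are simplified stand-ins for illustration only, not the paper's compliant contact model or its analytic wrench analysis, and all function names and thresholds are hypothetical.

```python
import numpy as np

def forms_seal(patch_normals, planarity_tol=0.1):
    # Stand-in seal check: treat the patch as sealable when its unit surface
    # normals barely deviate from their mean direction (i.e., roughly planar).
    mean_n = patch_normals.mean(axis=0)
    mean_n /= np.linalg.norm(mean_n)
    return np.max(1.0 - patch_normals @ mean_n) < planarity_tol

def resists_gravity_wrench(approach_axis, max_tilt_deg=30.0):
    # Stand-in wrench check: an approach axis close to vertical is assumed
    # able to support the object's weight against gravity.
    return abs(approach_axis @ np.array([0.0, 0.0, 1.0])) > np.cos(np.radians(max_tilt_deg))

def robust_suction_quality(patch_normals, approach_axis, n_samples=100, noise=0.02):
    # Monte Carlo robustness: fraction of noise-perturbed samples in which the
    # candidate both forms a seal and resists the gravity wrench.
    successes = 0
    for _ in range(n_samples):
        n = patch_normals + np.random.normal(scale=noise, size=patch_normals.shape)
        n /= np.linalg.norm(n, axis=1, keepdims=True)
        a = approach_axis + np.random.normal(scale=noise, size=3)
        a /= np.linalg.norm(a)
        if forms_seal(n) and resists_gravity_wrench(a):
            successes += 1
    return successes / n_samples
```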

Cited by 240 publications (195 citation statements)
References 35 publications (89 reference statements)

Citation statements:
“…Suction Grasping: For simulation experiments, grasp planning is done with a Dex-Net 1.0 suction grasping policy [27]. For physical experiments, suction cup grasps are planned with a Dex-Net 3.0 GQ-CNN [26], with mask-based constraints to plan grasps only on the goal object's segmentation mask. The GQ-CNN evaluates each candidate grasp and returns the grasp with the highest predicted quality and its associated quality metric.…”
Section: B. Search Policy (mentioning)
confidence: 99%
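As a rough illustration of the policy quoted above, the sketch below samples grasp candidates only inside the goal object's segmentation mask, scores each with a learned quality function, and returns the highest-scoring one. The function, its parameters, and the placeholder quality callable are assumptions for illustration; this is not the gqcnn library's API.

```python
import numpy as np

def plan_suction_grasp(depth_image, seg_mask, quality_fn, n_candidates=200, rng=None):
    # Illustrative sketch: sample pixel candidates restricted to the goal
    # object's segmentation mask, score each with a learned quality function
    # (a GQ-CNN in the cited work; here any callable), and return the
    # highest-scoring grasp with its predicted quality.
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(seg_mask)                       # mask-based constraint
    idx = rng.choice(len(xs), size=min(n_candidates, len(xs)), replace=False)
    candidates = [(int(xs[i]), int(ys[i]), float(depth_image[ys[i], xs[i]]))
                  for i in idx]
    qualities = [quality_fn(depth_image, c) for c in candidates]
    best = int(np.argmax(qualities))
    return candidates[best], qualities[best]

# Usage with a placeholder quality function that simply prefers nearer surfaces.
depth = np.random.uniform(0.5, 1.0, size=(480, 640))
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:380] = True
grasp, q = plan_suction_grasp(depth, mask, lambda d, c: 1.0 - c[2])
print("best grasp (x, y, depth):", grasp, "predicted quality:", round(q, 3))
```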
“…Real-world demonstrations in various forms are then used to process these samples: filter them or train a machine learning algorithm to predict success. For example, Mahler et al [1], [3], [2], [4] execute those grasps with a robot and record success/failure. Song et al [29] learn a Bayes Net to jointly model post-grasp task and discretized hand pose.…”
Section: Related Work (mentioning)
confidence: 99%
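A schematic sketch of the "execute grasps, record success/failure, train a predictor" pipeline mentioned above. The features and labels are synthetic, and scikit-learn's logistic regression stands in for the deep models used in the cited works.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic grasp features (e.g., approach tilt, local planarity, depth) paired
# with recorded execution outcomes (1 = success, 0 = failure). Both the
# features and the labels here are made up purely for illustration.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(500) > 0.8).astype(int)

clf = LogisticRegression().fit(X, y)                 # learn a success predictor
p = clf.predict_proba([[0.7, 0.4, 0.3]])[0, 1]
print(f"predicted success probability: {p:.2f}")
```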
“…Recent work on robotic grasping of household objects focuses on using large amounts of data collected by trying random grasping actions, often using parallel-jaw or suctioncup end effectors [1], [2], [3], [4]. This approach generates lots of self-supervised data that enables training of robust grasp policies.…”
Section: Introduction (mentioning)
confidence: 99%
“…Single-view images can be used for effective planning of grasping points for vacuum-based end effectors because only a single visible point of contact of suitable surface geometry is required [9]. Along with a greater number of fingers in a gripper, the estimation of grasping points becomes more difficult.…”
Section: A. Related Work (mentioning)
confidence: 99%
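As a rough illustration of the point the citing authors make, the sketch below selects suction candidates from a single-view depth image by keeping only locally planar patches. The normal estimation, camera intrinsics, window size, and flatness threshold are simplified assumptions, not the method of any cited work.

```python
import numpy as np

def surface_normals(depth, fx=525.0, fy=525.0):
    # Rough per-pixel normals of a single-view depth image from its spatial
    # gradients; fx, fy are illustrative pinhole-camera focal lengths.
    dzdx = np.gradient(depth, axis=1) * fx / depth
    dzdy = np.gradient(depth, axis=0) * fy / depth
    n = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def suction_point_candidates(depth, window=5, flatness_tol=0.02):
    # Keep pixel locations whose surrounding normals vary little, i.e. patches
    # flat enough for a suction cup needing only a single point of contact.
    normals = surface_normals(depth)
    half = window // 2
    candidates = []
    for y in range(half, depth.shape[0] - half, window):
        for x in range(half, depth.shape[1] - half, window):
            patch = normals[y - half:y + half + 1, x - half:x + half + 1]
            if patch.reshape(-1, 3).var(axis=0).sum() < flatness_tol:
                candidates.append((x, y))
    return candidates
```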