2021
DOI: 10.1109/lra.2021.3115406
SuctionNet-1Billion: A Large-Scale Benchmark for Suction Grasping

Cited by 34 publications (36 citation statements) | References 18 publications
“…They further proposed GQ-CNN to learn grasp quality and used a sampling-based method to propose an optimal grasp at inference time; they later extended this work with a fully convolutional GQ-CNN (Satish et al., 2019) that infers pixel-wise grasp quality, enabling faster grasping. Recently, Cao et al. (2021) used an encoder-decoder to infer grasp quality, labeled by a contact model similar to that of GQ-CNN, to generate the suction pose. However, the accuracy of the contact model depends on model complexity and parameter tuning.…”
Section: Related Work
confidence: 99%
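The fully convolutional approach described above predicts a per-pixel grasp-quality map and selects the best pixel at inference time. The selection step can be sketched as follows; `predict_quality` here is a hypothetical stand-in for the learned network, using low local depth variance as a crude proxy for a flat, sealable surface patch (this heuristic is an assumption for illustration, not the models' actual scoring):

```python
import numpy as np

def predict_quality(depth, k=1):
    """Return a per-pixel suction-quality map in [0, 1].

    Stand-in heuristic: lower local depth variance -> flatter patch ->
    higher suction quality. A real system would use a trained
    fully convolutional network here.
    """
    h, w = depth.shape
    quality = np.zeros_like(depth, dtype=float)
    for i in range(k, h - k):
        for j in range(k, w - k):
            patch = depth[i - k:i + k + 1, j - k:j + k + 1]
            quality[i, j] = 1.0 / (1.0 + patch.var())
    return quality

def best_suction_pixel(depth):
    """Pick the pixel with the highest predicted suction quality."""
    q = predict_quality(depth)
    idx = np.unravel_index(np.argmax(q), q.shape)
    return idx, q[idx]

# Toy scene: a flat plane with a raised box; any fully flat 3x3
# neighborhood scores a perfect 1.0.
depth = np.ones((8, 8))
depth[2:5, 2:5] += 0.5
(i, j), score = best_suction_pixel(depth)
```

The pixel-wise formulation avoids the per-sample evaluation loop of the original sampling-based GQ-CNN: a single forward pass scores every candidate location at once, which is the source of the speedup the quoted statement refers to.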
“…The advancement of affordable, precise consumer-grade 3D scanner hardware (SHINING 3D EinScan-SP) makes it possible to generate custom 3D models for individual use cases. For our work, we chose a subset of 33 high-quality meshes from [3] that are part of the YCB object set [13], took 4 from [15], scanned 36 objects ourselves, and remodeled 9 in CAD software when scanning was not possible. Obtaining accurate 3D models for all objects is challenging and time-consuming.…”
Section: A Custom Object Dataset and Novel Object Testset
confidence: 99%
“…Finding reliable grasps is therefore not limited to reasoning about the physical interaction between gripper and object; it also challenges the system to understand the arrangement of objects in the scene and to identify an appropriate picking sequence. While many studies have provided excellent datasets for individual sub-problems, such as SynPick [1] for pose estimation and gripper-object interaction, REGRAD [2] for relationship reasoning, SuctionNet-1Billion [3] for vacuum grasping, and Dex-Net 4.0 [4] for ambidextrous grasping, these works address the problem from only one side or do not target automation. Because robotic picking is a complex multi-stage problem, solving it from one aspect alone misses the details of the others.…”
Section: Introduction
confidence: 99%