2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01566

Graspness Discovery in Clutters for Fast and Accurate Grasp Detection

Abstract: Efficient and robust grasp pose detection is vital for robotic manipulation. For general 6 DoF grasping, conventional methods treat all points in a scene equally and usually adopt uniform sampling to select grasp candidates. However, we discover that ignoring where to grasp greatly harms the speed and accuracy of current grasp pose detection methods. In this paper, we propose "graspness", a quality based on geometry cues that distinguishes graspable areas in cluttered scenes. A look-ahead searching method is p…
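The abstract's central idea (score each scene point for graspability, then draw grasp candidates from high-scoring regions instead of sampling uniformly) can be illustrated with a short sketch. This is a hypothetical illustration, not the paper's code: the function names, the threshold, and the candidate count are invented for the example.

```python
# Hypothetical sketch: select grasp-candidate points by a per-point
# "graspness" score instead of uniform sampling.
import numpy as np

def select_candidates(points, graspness, k=1024, threshold=0.1):
    """Keep the k highest-graspness points above a minimum score.

    points:    (N, 3) array of scene points
    graspness: (N,)   per-point graspness scores in [0, 1]
    May return fewer than k points if few pass the threshold.
    """
    idx = np.nonzero(graspness > threshold)[0]   # discard clearly ungraspable areas
    order = idx[np.argsort(-graspness[idx])]     # rank remaining points, highest first
    return points[order[:k]]

def uniform_candidates(points, k=1024, seed=0):
    """Uniform-sampling baseline for comparison: ignores where to grasp."""
    rng = np.random.default_rng(seed)
    return points[rng.choice(len(points), size=k, replace=False)]
```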

Cited by 53 publications (21 citation statements) | References 42 publications

Citation statements (ordered by relevance):
“…However, setting up a real-world TAMP system often requires substantial task-specific knowledge and accurate 3D models of the environment, significantly limiting the environments to which the system can generalize. To address this challenge, recent work has adopted deep learning-based approaches for robotic manipulation, for instance, on grasp planning [44,47,48,62,65], motion planning [7,57], and reasoning about spatial relations [20,36,49].…”
Section: Related Work
confidence: 99%
“…For example, U-Net [209] has been widely used for grasp map synthesis [29,142,219,220]. Such an encoder-decoder architecture is widely used to synthesize pixel-wise grasps [11,28,116,239,255]. Another similar formulation for pixel-wise grasp synthesis is called grasp manifolds, proposed by [88].…”
Section: Pixel-level Grasp Map Synthesis
confidence: 99%
“…Another similar formulation for pixel-wise grasp synthesis is called grasp manifolds, proposed by [88]. A grasp manifold is defined as a closed set of points on an object representing graspable areas. Since a grasp map is more informative and provides a global grasp affordance indicating the grasp quality of the current viewpoint, it enables selection of the best view [116,239], under the assumption that the camera is not fixed, which holds in most cases for robots.…”
Section: Pixel-level Grasp Map Synthesis
confidence: 99%
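As a concrete (and deliberately tiny) illustration of the encoder-decoder formulation these excerpts describe, the sketch below maps an RGB-D image to per-pixel grasp outputs with a single U-Net-style skip connection. The channel sizes, the one-level depth, and the four-channel output head (quality, angle as sin/cos, gripper width) are assumptions for the sketch, not any cited model's architecture.

```python
# Minimal encoder-decoder sketch for pixel-wise grasp map synthesis,
# in the spirit of the U-Net-style models cited above (illustrative only).
import torch
import torch.nn as nn

class TinyGraspMapNet(nn.Module):
    def __init__(self, in_ch=4):  # e.g. 4-channel RGB-D input (assumed)
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.up   = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec  = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        # per-pixel outputs: grasp quality, angle (sin, cos), gripper width
        self.head = nn.Conv2d(32, 4, 1)

    def forward(self, x):                          # x: (B, in_ch, H, W), H and W even
        e1 = self.enc1(x)                          # full-resolution encoder features
        d = self.up(self.down(e1))                 # downsample, then upsample back
        d = self.dec(torch.cat([e1, d], dim=1))    # U-Net-style skip connection
        return self.head(d)                        # (B, 4, H, W) grasp map
```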
“…The data-driven grasping methods that predict explicit grasps in cluttered environments [3], [6], [7], [9], [11] usually follow these steps: 1) capture an RGB-D (or depth-only) measurement from a global position that views all objects in the fixed workspace; 2) generate grasp candidates and choose one using a trained model; 3) execute an open-loop pick-and-place motion with the target grasp configuration obtained in the previous step. Specifically, the method in [3] follows the steps above with a grasp prediction model trained on a large-scale real dataset, while [11] uses a synthetic dataset.…”
Section: Related Work
confidence: 99%
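The three-step open-loop pipeline quoted above translates almost directly into code. The sketch below is a hedged outline under assumed interfaces: `camera`, `grasp_model`, and `robot` are hypothetical placeholders, not APIs from any of the cited works.

```python
# Hedged sketch of the three-step open-loop pipeline described above.
# All interfaces (camera, grasp_model, robot) are hypothetical placeholders.
def open_loop_pick(camera, grasp_model, robot):
    # 1) capture an RGB-D view from a global position covering the workspace
    rgbd = camera.capture()

    # 2) let a trained model propose grasp candidates, keep the best-scoring one
    candidates = grasp_model.predict(rgbd)            # assumed: list of (pose, score)
    best_pose, _ = max(candidates, key=lambda c: c[1])

    # 3) execute the pick-and-place motion open loop, with no further feedback
    robot.move_to(best_pose)
    robot.close_gripper()
    robot.place_at(robot.home_pose)
```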
“…[Table excerpt: training data source per method]

Method | Training source
GG-CNN-cl [1] | real
MVP [4] | real
Dex-Net [8] | syn
[3], [6], [9] | real
[7], [10], [11] | syn
GPD [12] | real
QT-Opt [13] | real
Song et al. [14] | real
GraspPF (Ours) | both

The additional input of grasp rotation makes the network not restricted to a predefined rotation set, which is prevalent in prior works [3], [6], [7]. Additionally, it is computationally efficient enough to run in a closed-loop manner during the approach, making the resulting implementation reactive.…”
Section: Introduction
confidence: 99%