2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00299
6-DOF GraspNet: Variational Grasp Generation for Object Manipulation

Cited by 450 publications (411 citation statements)
References 17 publications
“…Recent methods leverage data from 3D object reconstruction [54,22,43]. Grasp-Net [45] formulates the problem using a variational autoencoder which, given an input point cloud, generates several grasp hypotheses that are later refined by a second network.…”
Section: Related Work
confidence: 99%
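As a rough illustration of the pipeline this excerpt describes (sample latent codes, decode them into 6-DoF grasp hypotheses conditioned on a point-cloud feature), here is a minimal NumPy sketch. The weights are random stand-ins, not the trained GraspNet model, and the feature extractor is a deliberately simplified PointNet-style pooling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "learned" weights; random stand-ins for a trained model.
W_feat = rng.standard_normal((64, 3))     # per-point embedding
W_dec = rng.standard_normal((7, 64 + 4))  # decoder: feature + latent -> grasp

def cloud_feature(points):
    """Max-pool a per-point linear embedding (PointNet-style, simplified)."""
    return (points @ W_feat.T).max(axis=0)          # (64,)

def sample_grasps(points, n_samples=10, latent_dim=4):
    """Decode grasp hypotheses from latents z ~ N(0, I) plus the cloud feature."""
    feat = cloud_feature(points)
    zs = rng.standard_normal((n_samples, latent_dim))
    inputs = np.hstack([np.tile(feat, (n_samples, 1)), zs])
    grasps = inputs @ W_dec.T                       # (n, 7): translation + quaternion
    # Normalize the quaternion part so each row encodes a valid rotation.
    quat = grasps[:, 3:]
    grasps[:, 3:] = quat / np.linalg.norm(quat, axis=1, keepdims=True)
    return grasps

cloud = rng.standard_normal((100, 3))  # stand-in for a depth-camera point cloud
hypotheses = sample_grasps(cloud)
print(hypotheses.shape)  # (10, 7)
```

In the paper's actual system a second network then refines these hypotheses; here the decoded poses are returned as-is.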
“…Note that this is a significantly more complex problem than that of generating robotic grasps, as robot end-effectors have far fewer DoF than the human hand. For instance, very recently, Mousavian et al [45] introduced GraspNet to predict 6-DoF grasps for object manipulation. Furthermore, the common practice in robotics is to use RGB-D cameras which, despite simplifying the process of modeling the geometry of the objects, do not have the versatility of standard RGB cameras.…”
Section: Introduction
confidence: 99%
“…poking). Most data-driven grasping algorithms at present can perform grasp detection in six degrees of freedom (6-DoF) with either closed-loop feedback, which only utilizes top-down grasps in simple tabletop settings [74], [75], or open-loop feedback [76], [77].…”
Section: A. Grasping
confidence: 99%
“…Along this line, methods predict the success of a proposed grasp by training a traditional classifier (Jiang et al., 2011; Fischinger et al., 2015) or deep neural network (Saxena et al., 2008; Lenz et al., 2015; Redmon and Angelova, 2015; Pinto and Gupta, 2016; Kumra and Kanan, 2017; Wang et al., 2017). Alternatively, grasp simulation or analytical grasp metrics are computed for objects in model databases to generate training data (Johns et al., 2016; Mahler et al., 2016, 2017; ten Pas et al., 2017; Cai et al., 2019; Liang et al., 2019; Mousavian et al., 2019). The task is then to learn a model that can predict the value of the grasp metric given a proposal and then select the grasp that is most likely to succeed.…”
Section: Related Work
confidence: 99%
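The selection step this excerpt describes (score each grasp proposal with a learned metric, then pick the highest-scoring one) can be sketched as follows; the logistic scorer and its weights are hypothetical stand-ins for a trained success predictor, not any specific published model:

```python
import numpy as np

rng = np.random.default_rng(1)

def grasp_score(grasp, w):
    """Stand-in for a learned success predictor: a logistic model over pose features."""
    return 1.0 / (1.0 + np.exp(-(grasp @ w)))

# 20 candidate 6-DoF grasps (here: translation + rotation vector, 6 numbers each).
candidates = rng.standard_normal((20, 6))
w = rng.standard_normal(6)  # hypothetical trained weights

scores = np.array([grasp_score(g, w) for g in candidates])
best = candidates[int(np.argmax(scores))]  # execute the most promising grasp
print(f"best score: {scores.max():.3f}")
```

The design choice the excerpt highlights is that the grasp metric (simulated or analytical) supplies the training labels, so at test time only the cheap learned scorer needs to be evaluated per proposal.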