2013
DOI: 10.48550/arxiv.1301.3592
Preprint

Deep Learning for Detecting Robotic Grasps

Cited by 8 publications (12 citation statements)
References 0 publications
“…For example, [20] uses vision based features (edge and texture filter responses) and learns a logistic regressor over synthetic data. On the other hand, [22], [23] use human annotated grasp data to train grasp synthesis models over RGB-D data. However, as discussed above, large-scale collection of training data for the task of grasp prediction is not trivial and has several issues.…”
Section: Related Work
confidence: 99%
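The approach summarised in that excerpt, hand-crafted edge and texture filter responses fed to a logistic regressor trained on synthetic data, can be illustrated with a minimal sketch. The code below is a hypothetical reconstruction of that kind of pipeline using scikit-learn and SciPy; the feature choices, patch sizes, and training data are placeholders, not the cited authors' implementation.

```python
# Hypothetical sketch: a logistic regressor over hand-crafted filter
# responses, in the spirit of the vision-based approach described above.
# Features and training data here are illustrative only.
import numpy as np
from scipy import ndimage
from sklearn.linear_model import LogisticRegression

def filter_responses(patch):
    """Edge- and texture-style features for one grayscale image patch."""
    gx = ndimage.sobel(patch, axis=1)    # horizontal edge response
    gy = ndimage.sobel(patch, axis=0)    # vertical edge response
    lap = ndimage.laplace(patch)         # coarse texture response
    return np.array([np.abs(gx).mean(), np.abs(gy).mean(), np.abs(lap).mean()])

# Synthetic training data: random patches with binary "graspable" labels.
rng = np.random.default_rng(0)
patches = rng.random((200, 32, 32))
labels = rng.integers(0, 2, size=200)

X = np.stack([filter_responses(p) for p in patches])
clf = LogisticRegression().fit(X, labels)

# Score a new candidate grasp patch.
print(clf.predict_proba(filter_responses(patches[0]).reshape(1, -1)))
```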
“…While some models directly estimate 6dof gripper poses from 3D inputs such as pointclouds, others estimate 2D gripper poses from depth or RGB images and project them to 3D space. The availability of standardised grasp datasets such as Cornell [60] and Jacquard [61] and its relative speed of detection has made the 2D input models a popular choice for application in robotic grasping. These 2D input models can also be categorised based on the type of outputs produced.…”
Section: Grasp Estimation With Convolutional Neural Network
confidence: 99%
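The projection of a 2D grasp prediction into 3D space mentioned in this excerpt is commonly done by back-projecting the predicted pixel through the depth value and the camera intrinsics. The sketch below is a minimal illustration assuming a pinhole camera model; the intrinsics and the grasp pixel are placeholder values, not parameters from the Cornell or Jacquard datasets.

```python
# Minimal sketch, assuming a pinhole camera model: lift a 2D grasp centre
# detected in a depth image into a 3D point in the camera frame.
# fx, fy, cx, cy and the example pixel/depth are illustrative placeholders.
import numpy as np

def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into camera coordinates."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example: grasp centre predicted at pixel (320, 240) with 0.55 m depth.
grasp_xyz = pixel_to_3d(320, 240, 0.55, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
print(grasp_xyz)  # 3D grasp point in the camera frame
```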
“…A comprehensive literature review of this area can be found in [17,18]. Recently, data-driven learning-based approaches have started to appear, with initial work focused on using human annotators [19]. However, in this work we are more interested in building a self-supervision system [20,21,22,23,24].…”
Section: Robotic Tasks
confidence: 99%