Robotics: Science and Systems IX 2013
DOI: 10.15607/rss.2013.ix.012
Deep Learning for Detecting Robotic Grasps

Abstract: We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast, as well as robust, we present a two-step cascaded structure with two deep networks, where the top detections from the first are re-evaluated by the seco…
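The two-step cascade described in the abstract can be sketched as follows. This is an illustrative outline, not the authors' implementation: the candidate generator, feature dimensions, and the two "networks" (reduced here to random linear scorers named `small_net` and `large_net`) are placeholder assumptions standing in for the paper's deep models.

```python
import numpy as np

rng = np.random.default_rng(0)

def small_net(features):
    # Stage 1 stand-in: a cheap linear scorer replacing the small deep network.
    w = rng.standard_normal(features.shape[1])
    return features @ w

def large_net(features):
    # Stage 2 stand-in: a "larger" model; here just a different linear map.
    w = rng.standard_normal(features.shape[1])
    return features @ w

def cascaded_detect(candidates, top_k=100):
    """Score every candidate cheaply, then re-evaluate only the top_k
    survivors with the expensive model, as in the two-step cascade."""
    coarse = small_net(candidates)        # fast pass over all candidates
    keep = np.argsort(coarse)[-top_k:]    # indices of the best coarse scores
    fine = large_net(candidates[keep])    # expensive pass on survivors only
    return keep[np.argmax(fine)]          # index of the selected grasp

# 10,000 candidate grasps, each a 24-dim feature vector (placeholder data).
cands = rng.standard_normal((10_000, 24))
best = cascaded_detect(cands)
print(best)
```

The design point is that the expensive second network touches only `top_k` of the 10,000 candidates, which is what makes exhaustive grasp detection tractable.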

Cited by 82 publications (14 citation statements); citing works published 2014–2023.
References 15 publications.
“…Our robot always lifts the handle side of the screwdriver without keeping it horizontal, as this grasp type does not provide a sufficient wrench for objects whose CG is far from the contacts. The 'PS3 joystick' is a difficult object for us to grasp, as is also reported in [19]: there are only a few viable pinch grasps that can be used to lift the object.…”
Section: Object Modelling and Grasp Generation
Confidence: 98%
“…Another line of research has focused on synthesizing grasps using statistical models [5] learned from a database of images [27] or point clouds [9], [15], [46] of objects annotated with grasps from human demonstrators [15], [27] or physical execution [15]. Kappler et al [18] created a database of over 700 object instances, each labelled with 500 Barrett hand grasps and their associated quality from human annotations and the results of simulations with the ODE physics engine.…”
Section: Related Work
Confidence: 99%
“…Can analogous scaling effects emerge when datasets of 3D object models are applied to learning robust grasping and manipulation policies for robots? This question is being explored by others [12], [18], [27], [32], [33], and in this paper we present initial results using a new dataset of 3D models and grasp planning algorithm.…”
Section: Introduction
Confidence: 99%
“…Modeling and predicting grasping interactions received special attention in robotics [Bohg et al 2013]. Data-driven techniques often rely on machine learning and train on annotated example shapes to predict graspable regions based on their geometric features [Saxena et al 2006; Saxena 2009; Lenz et al 2013]. Alternatively, one can use shape retrieval to find a similar object in an annotated database and transfer a grasping pose [Goldfeder and Allen 2011].…”
Section: Related Work
Confidence: 99%