2011 IEEE International Conference on Robotics and Automation
DOI: 10.1109/icra.2011.5980145

Efficient grasping from RGBD images: Learning using a new rectangle representation

Abstract: Given an image and an aligned depth map of an object, our goal is to estimate the full 7-dimensional gripper configuration: its 3D location, 3D orientation, and the gripper opening width. Recently, learning algorithms have been successfully applied to grasp novel objects, i.e., ones not seen by the robot before. While these approaches use low-dimensional representations such as a 'grasping point' or a 'pair of points' that are perhaps easier to learn, they only partly represent the gripper configuration and he…

Cited by 465 publications (492 citation statements); references 31 publications.
“…Previous works on visual-dependent robot grasping have shown promising results on learning grasping points from image-based 2D descriptors [18,26]. Other works exploit combinations of image-based and point cloud-based features [2,12]. Saxena et al. [12] extend this approach by computing grasping stability features from the point clouds. In their method, the point cloud features are linked to the gripper configuration, while the image-based features are linked to the visual graspability of a point.…”
Section: Related Work
confidence: 99%
“…A major drawback of such point-defined grasps, however, was that they only determined where to grasp an object; they did not determine how wide the gripper had to be opened or the orientation required for the gripper to successfully grasp the object. As a way to overcome this limitation, another popular grasp representation that has been proposed is the oriented rectangle representation used in [10,18–20,30,31]. According to Jiang et al. [30], their grasping configuration has a seven-dimensional representation containing the information of a grasping point, grasping orientation, and gripper opening width. In world coordinates, their grasp representation, G, is stated as G = (x, y, z, α, β, γ, l).…”
Section: Grasp Representation
confidence: 99%
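The seven-dimensional representation quoted above can be sketched as a simple data structure. This is a minimal illustration, not the authors' code: the class and field names are hypothetical, and it assumes (x, y, z) is a world-frame grasping point in metres, (α, β, γ) are Euler angles in radians, and l is the gripper opening width in metres.

```python
from dataclasses import dataclass
import math


@dataclass
class GraspConfiguration:
    """Hypothetical container for the 7D grasp G = (x, y, z, α, β, γ, l)."""
    x: float       # grasping point, world coordinates (m)
    y: float
    z: float
    alpha: float   # grasping orientation as Euler angles (rad)
    beta: float
    gamma: float
    l: float       # gripper opening width (m)

    def as_tuple(self):
        # Flatten to the 7-tuple form used in the quoted notation.
        return (self.x, self.y, self.z,
                self.alpha, self.beta, self.gamma, self.l)


# Example: a top-down grasp 25 cm above the table with a 6 cm opening.
g = GraspConfiguration(0.40, 0.10, 0.25, 0.0, math.pi / 2, 0.0, 0.06)
assert len(g.as_tuple()) == 7
```

The point is only that this representation fully specifies the gripper, unlike a single grasping point, which leaves orientation and opening width undetermined.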
“…These sensors have contributed to a wealth of research in a variety of domains, including autonomous driving [17], 3D modelling [12], grasping and manipulation [21], [13], and object recognition [11], to mention only a few. The work presented here belongs to this overall effort to exploit the richness of 3D data to build more accurate perception systems.…”
Section: Introduction
confidence: 99%