2016
DOI: 10.48550/arxiv.1610.05514
Preprint

Team Delft's Robot Winner of the Amazon Picking Challenge 2016

Cited by 9 publications (16 citation statements). References 0 publications.
“…Point-Cloud Registration (REG). We also compared with grasp planning based on point cloud registration, a state-of-the-art method for using precomputed grasps [13,20]. We first coarsely estimated the object instance and pose based on the top 3 most similar synthetic images from Dex-Net 2.0, where similarity is measured as distance between AlexNet conv5 features [13,34].…”
Section: Grasp Planning Methods Used For Comparison
confidence: 99%
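The registration baseline described above retrieves candidate object instances by nearest-neighbor search over AlexNet conv5 features. A minimal sketch of that retrieval step, assuming the conv5 feature vectors for the query image and the Dex-Net 2.0 synthetic renderings have already been extracted elsewhere (the function and array names here are hypothetical):

```python
# Hypothetical sketch: pick the top-3 synthetic images whose AlexNet conv5
# features are closest to the query image's features. Feature extraction is
# assumed to happen elsewhere with a pretrained AlexNet.
import numpy as np

def top_k_similar(query_feat, database_feats, k=3):
    """query_feat: (D,) conv5 features of the observed image.
    database_feats: (N, D) conv5 features of the synthetic renderings.
    Returns indices of the k closest database entries."""
    dists = np.linalg.norm(database_feats - query_feat[None, :], axis=1)
    return np.argsort(dists)[:k]

# Usage (shapes illustrative): the poses attached to the retrieved renderings
# serve as the coarse object instance / pose estimates.
# candidate_idx = top_k_similar(query_feat, database_feats, k=3)
```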
“…To execute grasps on a physical robot, a common approach is to precompute a database of known 3D objects labeled with grasps and quality metrics such as GraspIt! [14]. Precomputed grasps are indexed using point cloud registration: matching point clouds to known 3D object models in the database using visual and geometric similarity [4,5,7,13,20,22,27] and executing the highest quality grasp for the estimated object instance.…”
Section: Related Work
confidence: 99%
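The precomputed-grasp approach in this excerpt has two parts: register the observed point cloud against known object models, then execute the best stored grasp for the matched model. A rough sketch under assumed data structures (the database dictionaries and quality values are hypothetical), using Open3D's ICP for the registration step:

```python
# Hypothetical sketch: index a database of precomputed grasps by registering
# known object models to the observed point cloud, then return the
# highest-quality grasp mapped into the scene frame.
import numpy as np
import open3d as o3d

def best_grasp_from_database(observed_cloud, model_db, grasp_db, max_dist=0.01):
    """model_db: {name: o3d.geometry.PointCloud} known object models.
    grasp_db:  {name: [(grasp_pose_4x4, quality), ...]} grasps in model frame."""
    best = None
    for name, model_cloud in model_db.items():
        # Align the model to the observed segment; a real system would seed
        # ICP with a coarse pose estimate instead of the identity.
        result = o3d.pipelines.registration.registration_icp(
            model_cloud, observed_cloud, max_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        if best is None or result.fitness > best[1]:
            best = (name, result.fitness, result.transformation)
    name, _, T_model_to_scene = best
    grasp_pose, _ = max(grasp_db[name], key=lambda g: g[1])  # highest quality
    return name, T_model_to_scene @ grasp_pose
```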
“…In 2015, Team RBO [6] won by pushing objects from the top or side until suction was achieved, and Team MIT [35] came in second place by suctioning on the centroid of objects with flat surfaces. In 2016, Team Delft [10] won the challenge by approaching the estimated object centroid along the inward surface normal. In 2017, Cartman [?]…”
Section: B Grasp Planning
confidence: 99%
“…While grasp planning searches for gripper configurations that maximize a quality metric derived from mechanical wrench space analysis [24], human labels [28], or self-supervised labels [20], suction grasps are often planned directly on point clouds using heuristics such as grasping near the object centroid [10] or at the center of planar surfaces [4], [5]. These heuristics work well for prismatic objects such as boxes and cylinders but may fail on objects with non-planar surfaces near the object centroid, which is common for industrial parts and household objects such as staplers or children's toys.…”
Section: Introduction
confidence: 99%
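The two heuristics named in this excerpt, suction near the object centroid and suction at the center of a planar surface, can be sketched directly on a segmented point cloud. A minimal illustration with assumed inputs (precomputed per-point normals, and a single least-squares plane fit standing in for whatever planar-surface detector the cited systems actually use):

```python
# Hypothetical sketch of the two suction heuristics mentioned above.
import numpy as np

def centroid_suction(points, normals):
    """Suction at the surface point nearest the centroid, approaching along
    the inward surface normal. points, normals: (N, 3) arrays."""
    centroid = points.mean(axis=0)
    i = np.argmin(np.linalg.norm(points - centroid, axis=1))
    return points[i], -normals[i]   # (contact point, approach direction)

def planar_surface_suction(points):
    """Suction at the center of a plane fit to the points by least squares."""
    center = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - center, full_matrices=False)
    normal = vt[-1]                 # smallest singular vector ~ plane normal
    # In practice the normal's sign would be disambiguated toward the camera.
    return center, -normal
```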
“…accurate pose estimation for the entire scene. In this domain, solutions have been developed that use a Convolutional Neural Network (CNN) for object segmentation [2], [3] followed by a 3D model alignment step using point cloud registration techniques for pose estimation [4], [5]. The focus of the current paper is to improve this last step and increase the accuracy of pose estimation by reasoning at a scene-level about the physical interactions between objects.…”
Section: Introduction
confidence: 99%
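The pipeline in this last excerpt is CNN segmentation followed by model alignment. As a rough illustration of the glue between the two steps, assuming a CNN mask and a registered depth image are already available (function and parameter names are hypothetical), the masked depth pixels can be back-projected into the per-object point cloud that the registration step then aligns to the 3D model:

```python
# Hypothetical sketch: turn a CNN segmentation mask plus a depth image into
# the per-object point cloud used by the subsequent model-alignment step.
import numpy as np

def masked_depth_to_cloud(depth, mask, K):
    """depth: (H, W) depth in metres; mask: (H, W) bool from the CNN;
    K: 3x3 camera intrinsics. Returns an (N, 3) cloud in the camera frame."""
    v, u = np.nonzero(mask & (depth > 0))
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]   # back-project through the pinhole model
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

# The resulting segment cloud is what gets registered against the object's
# 3D model (e.g. with ICP) to estimate its pose.
```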