2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8461044

Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching

Abstract: This paper presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses a category-agnostic affordance prediction algorithm to select among four different grasping primitive behaviors. It then recognizes picked objects with a cross-domain image classi…

Cited by 441 publications (399 citation statements)
References 34 publications (46 reference statements)
“…They are (1) the base layer, which acts as a support and cover for the suction chamber; (2) the spacer layer, which provides a gap between the suction holes and the base layer, thus creating the suction chamber; (3) the suction layer with the suction holes; and (4) the enclosure layer, which establishes airtight contact with the object. The suction layer had four holes of diameter 5 mm each to maximize the suction effect [21]. The spacer layer had a small hole through which a soft tube of 1 mm diameter was inserted.…”
Section: Soft Enhancement Layers
confidence: 99%
“…Due to the suction cup, the gripper can handle a wide range of delicate and irregularly shaped objects without damaging them. In the Amazon Robotics Challenge, the teams that won first prize in 2017 and 2015 adopted this hybrid approach [21,22]. These teams had a rigid parallel-jaw gripper with an additional suction cup.…”
Section: Introduction
confidence: 99%
“…Unlike two-stage methods, a one-stage method [28][29][30][31][32][33][34][35][36][37], namely, one-shot grasp detection, directly regresses grasp points and their classes without object segmentation or pose estimation. This method is preferable for object picking in a warehouse for two reasons.…”
Section: Introduction
confidence: 99%
“…Levine et al [29] were among the first to incorporate a convolutional neural network (CNN) model that directly predicts the probability of success of a given motor command in visually guided grasp planning. Douglas et al [36] proposed Generative Grasping Convolutional Neural Networks (GG-CNN) to predict the quality and pose of grasps at every pixel with fast prediction speeds of 19 ms. Zeng et al [37] directly predicted the affordance map of four primitive grasp actions based on RGB-D images. However, these studies required large datasets.…”
Section: Introduction
confidence: 99%
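The per-pixel affordance prediction described in this excerpt boils down to scoring every (primitive, pixel) pair and executing the best one. Below is a minimal sketch of that selection step using plain NumPy arrays; the `select_grasp` helper and the random toy maps are illustrative assumptions, not the pipeline from any of the cited papers.

```python
import numpy as np

def select_grasp(affordance_maps: np.ndarray):
    """Pick the grasping primitive and pixel with the highest
    predicted affordance score.

    affordance_maps: array of shape (num_primitives, H, W) with
    scores in [0, 1], e.g. the per-pixel output of a fully
    convolutional network run on an RGB-D view of the scene.
    """
    flat_idx = np.argmax(affordance_maps)
    primitive, y, x = np.unravel_index(flat_idx, affordance_maps.shape)
    score = affordance_maps[primitive, y, x]
    return int(primitive), (int(y), int(x)), float(score)

# Toy example: 4 primitives scored over a 3x3 workspace.
rng = np.random.default_rng(0)
maps = rng.random((4, 3, 3))
primitive, (y, x), score = select_grasp(maps)
```

In a real system the selected primitive and pixel would then be mapped back to a 3D grasp pose via the camera calibration before execution.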
“…One way of tackling the few-shot recognition problem is learning how to match newly-incoming objects to their most similar support example [12,13]. These approaches are based on Convolutional Neural Networks (CNN) and typically reuse the knowledge acquired on large-scale benchmark datasets, such as ImageNet [14], to compensate for the lack of training examples.…”
Section: Introduction
confidence: 99%
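The matching step this excerpt describes — comparing a newly observed object against its most similar support example — can be sketched as nearest-neighbor search over feature embeddings. The helper name and cosine-similarity choice below are assumptions for illustration; the cited approaches use CNN features (e.g. from an ImageNet-pretrained backbone) where this sketch uses toy vectors.

```python
import numpy as np

def match_to_support(query: np.ndarray, support: np.ndarray) -> int:
    """Return the index of the support embedding most similar to the
    query embedding under cosine similarity.

    query:   (D,) feature vector of the newly observed object.
    support: (N, D) feature vectors of the support examples, one per
             known object, e.g. produced by a pretrained CNN.
    """
    q = query / np.linalg.norm(query)
    s = support / np.linalg.norm(support, axis=1, keepdims=True)
    # Dot products of unit vectors are cosine similarities.
    return int(np.argmax(s @ q))

# Toy check with 2-D embeddings for three support objects.
support = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query = np.array([0.1, 0.9])
best = match_to_support(query, support)
```

Because both sets of vectors are normalized, the same code works regardless of the embedding dimension or the magnitude of the raw CNN activations.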