2019 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2019.8793912

MetaGrasp: Data Efficient Grasping by Affordance Interpreter Network

Abstract: Data-driven approaches to grasping have advanced significantly in recent years, but they usually require large amounts of training data. To improve the efficiency of grasp data collection, this paper presents a novel grasp training system covering the whole pipeline from data collection to model inference. The system collects effective grasp samples with a corrective strategy assisted by an antipodal grasp rule, and we design an affordance interpreter network to predict a pixel-wise grasp affordance map. We define gras…
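To make the abstract's pipeline concrete, here is a minimal sketch of a fully convolutional network that predicts a pixel-wise grasp affordance map from an RGB image. The class count, layer sizes, and the name AffordanceInterpreterSketch are illustrative assumptions, not the architecture described in the paper.

```python
import torch
import torch.nn as nn

class AffordanceInterpreterSketch(nn.Module):
    """Minimal encoder-decoder FCN mapping an RGB image to a pixel-wise
    grasp affordance map with one channel per affordance class.
    All layer sizes here are illustrative assumptions."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            # Final layer restores input resolution and emits class logits.
            nn.ConvTranspose2d(32, num_classes, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Example: a 480x640 RGB image yields a (1, 3, 480, 640) affordance map.
logits = AffordanceInterpreterSketch()(torch.zeros(1, 3, 480, 640))
```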


Cited by 35 publications (29 citation statements)
References: 33 publications
“…The initial research on robotic grasp quality was mostly based on force closure (Faverjon and Ponce, 1993) and form closure (Dizioğlu and Lakshminarayana, 1984). Jiang et al. (2011) use an oriented rectangle to represent the grasp configuration, including the position, orientation, and width of the gripper. After the rise of deep learning, Redmon and Angelova (2014), Morrison et al. (2018), and Cai et al. (2019) proposed methods that use a neural network to obtain the object grasp rectangle representation. However, deep learning methods require training in advance, and grasp quality depends largely on the labeled data sets.…”
Section: Related Work
Mentioning confidence: 99%
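The oriented grasp rectangle mentioned above (Jiang et al., 2011) can be captured in a small data structure. This is a generic sketch; the field names and corner convention are assumptions, not taken from any cited implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class GraspRectangle:
    """Oriented grasp rectangle in image coordinates: center (x, y),
    gripper opening `width`, jaw size `height`, rotation `theta` (rad).
    Field names are illustrative assumptions."""
    x: float
    y: float
    theta: float
    width: float
    height: float

    def corners(self):
        """Return the four corner points of the rotated rectangle."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        half = [(-self.width / 2, -self.height / 2),
                ( self.width / 2, -self.height / 2),
                ( self.width / 2,  self.height / 2),
                (-self.width / 2,  self.height / 2)]
        return [(self.x + c * dx - s * dy, self.y + s * dx + c * dy)
                for dx, dy in half]
```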
“…In recent years, deep learning networks have achieved great success in computer vision tasks such as semantic segmentation (Kaiming et al., 2017) and object detection (Bochkovskiy et al., 2020). Deep convolutional networks have also been applied to the recognition of robot grasp poses (Redmon and Angelova, 2014; Lenz et al., 2015; Morrison et al., 2018; Cai et al., 2019). However, training such a network requires a large number of data sets, and applying it to new data it has not seen often causes problems.…”
Section: Introduction
Mentioning confidence: 99%
“…With the development of deep learning, a large number of methods apply object detection networks and depth sensors to obtain 3D bounding boxes of targets for locating the workspace [10,11,12]. Recently, Cai et al. [13] constructed an end-to-end network to estimate pixel-level contact points of the target for perceiving the grasp location. However, the above methods are limited to a preset workspace location.…”
Section: A. Related Work
Mentioning confidence: 99%
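Locating a target in 3D from an object detection plus a depth sensor, as described above, typically reduces to back-projecting a pixel through the pinhole camera model. The sketch below is a generic utility under that assumption; the intrinsics come from the depth camera calibration, and nothing here is taken from the cited works.

```python
import numpy as np

def deproject_pixel(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth (meters) into a 3D
    point in the camera frame using the pinhole model. fx, fy, cx, cy
    are the camera intrinsics from calibration."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example: the center of a detected bounding box at 0.8 m depth.
point = deproject_pixel(320, 240, 0.8, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```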
“…Another pixel-wise affordance-based method, which provided not only grasp points but general surfaces for robotic manipulation, was introduced in [36]. Lastly, the authors in [37] proposed an affordance interpreter network to predict a pixel-wise affordance map. In contrast to the above-mentioned methods, this map provided several regions with various grasp information (stable horizontal grasp points, negative grasp locations, background, etc.).…”
Section: B. Deep Convolutional Network for Direct Grasping Point Estimation
Mentioning confidence: 99%
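Decoding such a multi-region affordance map usually means classifying each pixel and then selecting the strongest graspable pixel. The following sketch assumes illustrative class indices (background, graspable, negative); the actual label set in [37] may differ.

```python
import numpy as np

# Illustrative class indices for a multi-region affordance map;
# the actual labels in the cited work may differ.
BACKGROUND, GRASPABLE, NEGATIVE = 0, 1, 2

def best_grasp_pixel(affordance_logits: np.ndarray):
    """Given per-class logits of shape (C, H, W), keep only pixels whose
    argmax class is GRASPABLE and return the (row, col) with the highest
    graspable score, or None if no pixel qualifies."""
    labels = affordance_logits.argmax(axis=0)
    scores = np.where(labels == GRASPABLE,
                      affordance_logits[GRASPABLE], -np.inf)
    if not np.isfinite(scores).any():
        return None
    return np.unravel_index(scores.argmax(), scores.shape)
```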