2019 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2019.8793830
Fast and Precise Detection of Object Grasping Positions with Eigenvalue Templates

Cited by 4 publications (7 citation statements) · References 15 publications
“…Suction grippers need planar surfaces close to the centroid of objects to grasp effectively [16]. Various approaches to estimate the grasp position have been developed, including those using only depth images [12], [17], or making use of point clouds [15], [16], and those using RGB images [8], [18].…”
Section: Related Work
confidence: 99%
“…The fast grasp evaluator (FGE) [17] is a template matching method for depth images that uses only the depth image and templates of the gripper to find the optimum grasping points. Its process flow shown in Fig.…”
Section: Fast Grasp Evaluator
confidence: 99%
“…Grasping point detection algorithms [15][16][17] are robust methods to find graspable points from cluttered scenes. Fast Graspability Evaluation (FGE) [17] is a practical-use method that finds graspable points by convolving the 2D gripper model and the parts surfaces on a depth map.…”
Section: Kitting Task Procedures
confidence: 99%
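The FGE idea described in the excerpts above — convolving a 2D model of the gripper with a thresholded depth map and reading off high-scoring pixels as graspable points — can be sketched as follows. This is a minimal illustration, not the published implementation: the footprint masks, the two depth thresholds, and the simple contact-minus-collision score are all illustrative assumptions.

```python
import numpy as np

def correlate2d_same(image, kernel):
    """Centered cross-correlation with zero padding (odd-sized kernels)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = float(np.sum(padded[i:i + kh, j:j + kw] * kernel))
    return out

def graspability_map(depth, contact_footprint, collision_footprint,
                     z_contact, z_collision):
    """Score every pixel of a depth map as a candidate grasp centre.

    Surfaces at or above z_contact count as graspable contact area;
    surfaces at or above z_collision count as obstacles the hand body
    must not hit (both thresholds are assumptions of this sketch).
    """
    contact_region = (depth >= z_contact).astype(float)
    collision_region = (depth >= z_collision).astype(float)
    contact_score = correlate2d_same(contact_region, contact_footprint)
    collision_score = correlate2d_same(collision_region, collision_footprint)
    # High overlap with the contact footprint is rewarded; any overlap of
    # the wider hand footprint with obstacles is penalised.
    return contact_score - collision_score

# Toy scene: a single 10x10 block of height 1.0 in an otherwise flat bin.
depth = np.zeros((40, 40))
depth[15:25, 15:25] = 1.0
contact = np.ones((3, 3))      # small pad / fingertip footprint (assumed)
collision = np.ones((7, 7))    # wider hand-body footprint (assumed)
g = graspability_map(depth, contact, collision, z_contact=0.5, z_collision=1.5)
best = np.unravel_index(np.argmax(g), g.shape)  # lands on top of the block
```

In the toy scene nothing reaches the collision threshold, so the score is pure contact overlap and the best pixel falls on the block's top surface; in a cluttered bin the collision term would push candidates away from tall neighbouring objects, which is the behaviour the quoted passages attribute to FGE.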
“…Given an RGB-D image, the grasp configuration for a jaw gripper (Kumra and Kanan, 2017; Chu et al., 2018; Zhang et al., 2019) or a vacuum gripper (Araki et al., 2020; Jiang et al., 2020) can be directly predicted using a deep convolutional neural network (DCNN). Learning was extended from points to regions by Domae et al. (2014) and Mano et al. (2019), who proposed a convolution-based method in which the hand shape mask is convolved with the depth mask to obtain the region of the grasp points. Matsumura et al. (2019) later learned the peak among all regions for different hand orientations to detect a grasp point capable of avoiding multiple objects.…”
Section: Introduction
confidence: 99%