2020 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra40945.2020.9197256

Using Synthetic Data and Deep Networks to Recognize Primitive Shapes for Object Grasping

Cited by 36 publications (29 citation statements)
References: 40 publications

“…1.0.3 Contribution. This study extends a previous conference version (Lin et al. 2020) beyond proof-of-concept and focuses on the value of explicitly encoding shape information. It improves the dataset generation strategy and replaces the post-processing components downstream of shape segmentation with improved model-based approaches based on best practice.…”
Section: Introduction (mentioning)
confidence: 84%
“…A sufficient amount of data is required to achieve a high-performance deep neural network, yet collecting it is labor-intensive and expensive. Therefore, various previous studies have improved performance by using synthetic data when sufficient amounts of data are unavailable [44][45][46][47][48][49]. Thus, we generated 37,323 synthetic data samples and used them to achieve high performance.…”
Section: Discussion (mentioning)
confidence: 99%
“…The first type mainly employs geometric information to approximate the object or robot gripper for 3-D grasp detection. Requiring prior object knowledge for candidate grasp generation, model-based methods approximate objects with box-based shapes (Huebner and Kragic 2008) or primitive surface shapes (Lin et al. 2020). Once the shape is recognized, extracted, and estimated, the grasps naturally follow.…”
Section: 3-D Grasp Prediction (mentioning)
confidence: 99%
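The quoted passage describes the primitive-shape route to grasp detection: once an object is approximated by a geometric primitive, grasp candidates follow directly from the primitive's parameters. As a purely illustrative aid, and not code from the cited papers, a minimal sketch for a fitted cylinder might look like the following (the function name and parameter choices are assumptions):

import numpy as np

def cylinder_side_grasps(center, axis, radius, num_candidates=8):
    # Sample side grasps on a cylinder primitive: the gripper approaches
    # perpendicular to the axis and closes across the diameter.
    axis = axis / np.linalg.norm(axis)
    # Build an orthonormal basis (u, v) spanning the plane normal to the axis.
    ref = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(ref, axis)) > 0.9:
        ref = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, ref)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)

    grasps = []
    for theta in np.linspace(0.0, 2.0 * np.pi, num_candidates, endpoint=False):
        radial = np.cos(theta) * u + np.sin(theta) * v  # unit vector pointing outward
        grasps.append({
            "position": center + radius * radial,   # contact point on the surface
            "approach": -radial,                     # move in toward the axis
            "closing_dir": radial,                   # fingers close across the diameter
            "width": 2.0 * radius,                   # required gripper opening
        })
    return grasps

# Example: eight candidates for an upright cylinder of radius 3 cm.
candidates = cylinder_side_grasps(center=np.array([0.0, 0.0, 0.1]),
                                  axis=np.array([0.0, 0.0, 1.0]),
                                  radius=0.03)
print(len(candidates), "grasp candidates")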
“…Analysis of Outcomes: Using the confidence interval of GKNet as a starting point for the comparison, only two methods score higher than the lower limit of the confidence interval (92.9%): Viereck et al. (2017) and Lin et al. (2020).…”
Section: Comparison Of Grasping Outcomes With Published Work (mentioning)
confidence: 99%