2020
DOI: 10.1007/s00170-020-05257-2
Grasping pose estimation for SCARA robot based on deep learning of point cloud

Cited by 39 publications (25 citation statements)
References 16 publications
“…The embeddings from GPNet and SRNet are combined to refine the detected grasp. PointNetRGPE (Wang Z. et al, 2020) first predicts the corresponding class number from the object point cloud data, which is fused with the point coordinates and passed into the grasping pose estimation network. The network has three PointNet-based sub-networks that estimate the translation, rotation, and rotation sign of the grasp pose.…”
Section: End-to-end and Others (mentioning)
Confidence: 99%
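The three-headed structure described in this excerpt can be illustrated with a toy numpy sketch: a shared per-point MLP, symmetric max-pooling into a global feature, and three separate regression heads. All dimensions, weights, and the way the class number is fused as an extra point channel are assumptions for illustration, not the published PointNetRGPE architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(points, w, b):
    """One per-point linear layer + ReLU (PointNet-style shared MLP)."""
    return np.maximum(points @ w + b, 0.0)

def head(feat, w, b):
    """A linear regression head on the pooled global feature."""
    return feat @ w + b

# Toy object point cloud: N points, each (x, y, z) fused with a class-label
# channel, as the excerpt describes (hypothetical sizes and random weights).
n_points, in_dim, feat_dim = 128, 4, 64
cloud = rng.normal(size=(n_points, 3))
class_id = 2.0  # predicted class number, fused as an extra channel
x = np.concatenate([cloud, np.full((n_points, 1), class_id)], axis=1)

# Shared per-point MLP followed by symmetric max-pooling -> global feature.
w1, b1 = rng.normal(size=(in_dim, feat_dim)), np.zeros(feat_dim)
global_feat = shared_mlp(x, w1, b1).max(axis=0)

# Three separate heads, mirroring the translation / rotation / rotation-sign
# sub-networks mentioned in the excerpt.
w_t, b_t = rng.normal(size=(feat_dim, 3)), np.zeros(3)  # translation (x, y, z)
w_r, b_r = rng.normal(size=(feat_dim, 1)), np.zeros(1)  # rotation angle
w_s, b_s = rng.normal(size=(feat_dim, 1)), np.zeros(1)  # rotation-sign logit

translation = head(global_feat, w_t, b_t)
rotation = head(global_feat, w_r, b_r)
sign = np.sign(head(global_feat, w_s, b_s))

print(translation.shape, rotation.shape, sign.shape)  # (3,) (1,) (1,)
```

The max-pool makes the global feature invariant to the ordering of the input points, which is the core PointNet idea the cited sub-networks build on.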
“…After the screw locking is completed, the end effector returns to the initial position [14,15]. At this time, the assembly line continues to move forward and the next laptop to be assembled moves to the assembly position [16,17]. The specific parameters are shown in Table 1.…”
Section: Spatial Trajectory (mentioning)
Confidence: 99%
“…2 (b). In those tasks, the 3D vision sensors can be utilized to capture the point cloud data (PCD) and estimate the pose of parts through the iterative closest point (ICP) algorithm [14] or using neural networks [11,13,28].…”
Section: Introduction (mentioning)
Confidence: 99%
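The ICP-based pose estimation this excerpt refers to can be sketched minimally: alternate brute-force nearest-neighbour matching with the closed-form Kabsch/SVD rigid alignment. This is a generic point-to-point ICP under a small-motion assumption, not the variant used in any of the cited papers; the synthetic point cloud and the recovered transform are invented for the demonstration.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Kabsch/SVD solution for the rigid transform aligning src to dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # fix an improper (reflection) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: nearest-neighbour matching + Kabsch."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small clouds).
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic check: recover a known small rotation + translation.
rng = np.random.default_rng(1)
model = rng.uniform(-1, 1, size=(200, 3))
angle = 0.1
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
t_true = np.array([0.05, -0.02, 0.03])
scene = model @ Rz.T + t_true
R_est, t_est = icp(model, scene)
print(np.allclose(R_est, Rz, atol=1e-3), np.allclose(t_est, t_true, atol=1e-3))
```

With identical point sets and a small initial offset, the nearest-neighbour matches quickly become exact and the Kabsch step recovers the transform; real sensor data additionally needs outlier rejection and a reasonable initial guess, which is exactly where the neural-network alternatives mentioned in the excerpt come in.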
“…Fig. 2 (a) demonstrates some assembly tasks, such as surface mount assembly [28,29] and peg-in-hole [25] tasks, which only require compensation of 3-dimensional (3D) pose errors. The 3D pose error includes the position errors along the x-axis and y-axis and the angle error around the normal of the plane.…”
Section: Introduction (mentioning)
Confidence: 99%
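The 3-DoF pose error the excerpt describes (x, y, and the angle about the plane normal) can be written as a simple planar error vector. The helper below is a hypothetical illustration with made-up numbers, not code from the cited work; the angle difference is wrapped so the error stays in (-pi, pi].

```python
import numpy as np

def planar_pose_error(target, actual):
    """3-DoF pose error (dx, dy, dtheta) between planar poses (x, y, theta)."""
    dx = target[0] - actual[0]
    dy = target[1] - actual[1]
    # Wrap the angular error around the plane normal into (-pi, pi].
    dtheta = (target[2] - actual[2] + np.pi) % (2 * np.pi) - np.pi
    return np.array([dx, dy, dtheta])

err = planar_pose_error(np.array([0.10, 0.20, 0.05]),
                        np.array([0.08, 0.23, -0.01]))
print(err)  # [ 0.02 -0.03  0.06]
```

Tasks needing only this compensation are simpler than full 6-DoF grasping, since the remaining three degrees of freedom are fixed by the planar assembly surface.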