2020
DOI: 10.1108/aa-06-2019-0101
A spatial information inference method for programming by demonstration of assembly tasks by integrating visual observation with CAD model

Abstract: Purpose: In robot programming by demonstration (PbD) of small-parts assembly tasks, the accuracy of part poses estimated by vision-based techniques in the demonstration stage is far from sufficient to ensure successful execution. This paper aims to develop an inference method that improves the accuracy of poses and assembly relations between parts by integrating visual observation with the computer-aided design (CAD) model. Design/methodology/approach: In this paper, the authors propose a spatial information inference met…

Cited by 4 publications (2 citation statements)
References 36 publications
“…They proposed a spatial information inference method (called PAGC⋆) to improve the accuracy by integrating visual observation with CAD model. Relation (collinear and coplanar), distance and rotation similarity are used to match observations with CAD and the likelihood maximization estimation is used to infer the accurate poses and assembly relations based on the probabilistic assembly graph (Zhou et al., 2020).…”
Section: Related Work
confidence: 99%
“…Cloud robotics generally have strong remote processing ability, which also puts forward certain requirements for the robot’s environmental adaptability. In the existing visual inspection process of robot grasping or assembling, most of the information on the desktop is obtained by a fixed-angle camera (Wu et al., 2020; Lin et al., 2019; Liu and Qiao, 2009; Zhou et al., 2020) or by moving a camera mounted on the arm to a fixed position (Monica and Aleotti, 2020). Especially in some scenes that require human-computer interaction, the camera may be installed far away from the desktop to obtain the information around the robot (Wang et al., 2017).…”
Section: Introduction
confidence: 99%