2021
DOI: 10.1108/ir-02-2021-0043

Industrial robot programming by demonstration using stereoscopic vision and inertial sensing

Abstract: Purpose – The purpose of this paper is to present a programming by demonstration (PbD) system based on 3D stereoscopic vision and inertial sensing that provides a cost-effective pose tracking system, even during error-prone situations such as camera occlusions. Design/methodology/approach – The proposed PbD system is based on the 6D Mimic innovative solution, whose six-degrees-of-freedom marker hardware had to be revised and restructured to accommodate an IMU sensor. Additionally, a new software pipeline was de…
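The abstract's central idea, fusing a stereoscopic marker pose with inertial measurements so tracking survives camera occlusions, can be illustrated with a minimal complementary-filter sketch. This is an assumption-laden illustration, not the paper's actual 6D Mimic pipeline: the class name, the update rates, the fixed blend weight and the position-only state (orientation would be handled analogously) are all invented for the example.

```python
import numpy as np

# Complementary fusion of a stereoscopic marker pose with IMU dead-reckoning.
# Everything here (names, rates, the fixed blend weight) is an illustrative
# assumption, not the paper's actual 6D Mimic pipeline.

class PoseTracker:
    def __init__(self, vision_weight=0.7):
        self.vision_weight = vision_weight  # trust placed in the camera pose
        self.position = np.zeros(3)         # fused position estimate (m)
        self.velocity = np.zeros(3)         # integrated velocity (m/s)

    def predict_with_imu(self, accel_world, dt):
        """Dead-reckon from world-frame acceleration (gravity already removed)."""
        self.velocity += accel_world * dt
        self.position += self.velocity * dt
        return self.position

    def correct_with_vision(self, vision_position):
        """Blend the stereo marker pose back in whenever the marker is visible."""
        w = self.vision_weight
        self.position = (1.0 - w) * self.position + w * np.asarray(vision_position)
        return self.position


tracker = PoseTracker()
dt = 0.01                                   # 100 Hz IMU samples
for step in range(1000):
    accel = np.zeros(3)                     # stand-in for a real IMU sample
    tracker.predict_with_imu(accel, dt)
    marker_visible = step % 10 == 0         # 10 Hz vision, drops out under occlusion
    if marker_visible:
        tracker.correct_with_vision(np.zeros(3))
```

During an occlusion the loop simply keeps dead-reckoning on IMU samples; each time the marker reappears, the vision pose pulls the estimate back and bounds the accumulated drift.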

Cited by 6 publications (3 citation statements)
References 31 publications
“…Object grasping and manipulation rely on computer vision and reinforcement learning (He et al., 2019; de Souza et al., 2022). Object release is based on a priori models and real-time information.…”
Section: Technologies (citation type: mentioning; confidence: 99%)
“…The majority of visual tracking systems only employ monocular vision, which is prone to high false detection rates due to the absence of depth information in two-dimensional images [19]. To overcome this limitation, stereoscopic vision has been proposed as a solution, which utilizes two cameras to provide depth information, leading to a reduction in the number of false detections [20]. However, the use of stereoscopic vision comes with an increased computational cost, which may restrict its use in smaller platforms.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
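The depth recovery that the statement above attributes to a two-camera setup comes from stereo triangulation: for a rectified pair with focal length f (in pixels) and baseline B (in metres), a feature seen with disparity d pixels between the left and right images lies at depth Z = fB/d. A minimal sketch follows; the focal length and baseline values are illustrative, not taken from any cited system.

```python
import numpy as np

# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# The focal length and baseline below are illustrative values only.

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Return metric depth per pixel; invalid (non-positive) disparities -> inf."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0,
                        focal_px * baseline_m / disparity_px,
                        np.inf)

# A feature offset by 42 px between the two images:
print(depth_from_disparity(42.0))  # -> 2.0 m for f = 700 px, B = 0.12 m
```

The same relation also explains the computational cost the statement mentions: producing a dense disparity map means solving this correspondence for every pixel, which monocular trackers avoid at the price of losing depth.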
“…wearable gloves; Zhu et al., 2020) or markers (e.g. motion capture; Mueller et al., 2019; Ferreira et al., 2016; de Souza et al., 2021a) to the demonstrator and/or the interacting objects. Instead of attaching sensors, the vision-based imitation approach, without the extra effort of interactive training or the need for any special equipment, is considered a more natural and low-cost approach.…”
Section: Introduction (citation type: mentioning; confidence: 99%)