2021
DOI: 10.1007/s10846-020-01303-z
Towards Autonomous Robotic Assembly: Using Combined Visual and Tactile Sensing for Adaptive Task Execution

Abstract: Robotic assembly tasks are typically implemented in static settings in which parts are kept at fixed locations by making use of part holders. Very few works deal with the problem of moving parts in industrial assembly applications. However, having autonomous robots that are able to execute assembly tasks in dynamic environments could lead to more flexible facilities with reduced implementation efforts for individual products. In this paper, we present a general approach towards autonomous robotic assembly that…

Cited by 15 publications (11 citation statements)
References 64 publications (87 reference statements)
“…State estimation has been applied to assembly tasks in prior works primarily using force data [5][6][7]. [8] and [9] fuse visual observations with force data to track part poses, but both methods rely on manually defined image features such as line or blob detectors. All these methods adopt particle filtering to estimate the hole position.…”
Section: B. State Estimation for Assembly Tasks
confidence: 99%
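The particle filtering mentioned in the statement above can be illustrated with a minimal 1-D sketch: particles represent hypotheses about the hole position, each contact observation reweights them by a Gaussian likelihood, and low-diversity particle sets are resampled. This is not the implementation of any of the cited works; the hole position, noise levels, and search range are all made-up values.

```python
import math
import random

random.seed(0)

TRUE_HOLE = 2.5   # hypothetical true hole position (mm)
N = 500           # number of particles
OBS_NOISE = 0.3   # std. dev. of simulated contact observations (mm)

# Particles: uniform guesses over the search region; uniform weights.
particles = [random.uniform(-5.0, 5.0) for _ in range(N)]
weights = [1.0 / N] * N

for _ in range(50):
    # Simulated noisy observation of the hole position (e.g. derived
    # from a contact event during a search motion).
    z = random.gauss(TRUE_HOLE, OBS_NOISE)

    # Weight update: Gaussian likelihood of z under each particle.
    weights = [w * math.exp(-0.5 * ((p - z) / OBS_NOISE) ** 2)
               for w, p in zip(weights, particles)]
    total = sum(weights) or 1e-300
    weights = [w / total for w in weights]

    # Resample when the effective sample size drops, adding a small
    # jitter to preserve particle diversity.
    if 1.0 / sum(w * w for w in weights) < N / 2:
        particles = random.choices(particles, weights=weights, k=N)
        particles = [p + random.gauss(0.0, 0.05) for p in particles]
        weights = [1.0 / N] * N

# Posterior mean as the point estimate of the hole position.
estimate = sum(p * w for p, w in zip(particles, weights))
print(f"estimated hole position: {estimate:.2f} mm")
```

In the multi-dimensional case used for real peg-in-hole tasks, particles would carry a full part pose and the likelihood would combine visual and force measurements, but the weight-update/resample loop is the same.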
“…State estimation methods seek to solve this problem by predicting the ground truth state of the environment from noisy observations [1][2][3][4]. A typical state representation is object poses [5][6][7][8][9], which are continuously estimated and then used to guide the robot's motion, e.g., for aligning the end-effector with a door knob. However, less attention has been paid to estimating high-level symbolic states, such as whether the door knob is locked or the door is fully shut.…”
Section: Introduction
confidence: 99%
“…This research includes dimension inspection [30], object recognition [31], and localization [29]. In the context of robot assembly, visual and tactile sensing has been used to continuously track assembly parts using multimodal fusion based on particle filters [32] and Bayesian state estimation [13]. We share the principal idea of combining data from visual and force-based sensing.…”
Section: Related Work
confidence: 99%
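The Bayesian fusion of visual and force-based sensing mentioned above can be sketched in its simplest form: two independent Gaussian estimates of the same quantity combine into a posterior whose mean is the precision-weighted average. The function name and all numbers below are illustrative, not taken from the cited works.

```python
def fuse(mu_v, var_v, mu_f, var_f):
    """Fuse a visual estimate (mu_v, var_v) and a force-based
    estimate (mu_f, var_f) of the same 1-D quantity, assuming
    independent Gaussian noise on each sensor."""
    # Precisions (inverse variances) add; means are precision-weighted.
    var = 1.0 / (1.0 / var_v + 1.0 / var_f)
    mu = var * (mu_v / var_v + mu_f / var_f)
    return mu, var

# Hypothetical numbers: vision is coarse (variance 4.0 mm^2),
# contact sensing is precise (variance 0.25 mm^2).
mu, var = fuse(10.0, 4.0, 11.0, 0.25)
print(f"fused estimate: {mu:.3f} mm, variance: {var:.4f} mm^2")
```

Note that the fused variance is always smaller than either input variance, which is why combining a coarse global sensor (camera) with a precise local one (force/tactile) is attractive for assembly.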
“…The evaluation of the proposed approach comprised two scenarios. Peg-in-hole assembly is chosen as the first use case because it reflects the typical complexity of industrial assembly tasks [13]. We show that it is possible to apply the method to other tasks by performing an evaluation of a car starter assembly task.…”
Section: Introduction
confidence: 99%
“…However, a pre-constructed object model is required, which is difficult for the previously mentioned operating object. In the field of robotic assembly, Korbinian et al. [22] presented a framework for tracking visual and tactile information assembly to perform assembly operations for multiple hole types. Besides, some researchers have fused haptic and visual information using a Bayesian framework to achieve the estimation of target positions of assembled objects in industrial assembly [23, 24].…”
Section: Introduction
confidence: 99%