2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8461213
Towards Globally Consistent Visual-Inertial Collaborative SLAM

Abstract: Motivated by the need for globally consistent tracking and mapping before autonomous robot navigation becomes realistically feasible, this paper presents a novel backend to monocular-inertial odometry. We evaluate the performance of our system on Unmanned Aerial Vehicles (UAVs), some of the most challenging platforms for vision-based perception. Our experimental validation demonstrates that the proposed approach achieves drift correction and metric scale estimation from a single UAV on benchmarking datas…

Cited by 8 publications (9 citation statements) | References 21 publications (39 reference statements)
“…Existing techniques for perception in aerial manipulation are, in most cases, methods adapted from aerial vehicles without manipulation skills to provide positional accuracy and place recognition [28]. The AEROARMS I&M use cases, however, require perception modules that go beyond the state of the art to: (1) accurately localize the vehicle, both during the navigation and manipulation phases; (2) localize and (3) detect the objects involved in the I&M tasks; and (4) pick up or release the crawler by the aerial robot.…”
Section: Perception
confidence: 99%
“…Alongside the emergence of vision-based SLAM for single robots [2], [16], [18], [22], research into multi-robot systems has recently been attracting increasing attention. The collaborative frameworks proposed in [5] and [11] demonstrate global mapping from Keyframe (KF) data obtained from multiple UAVs. Other systems aim to distribute the parts of the SLAM estimation process across the agents and a central server [13], [23], [25], promising to reduce the computational load on the agents and to make map data generated by each agent available to the rest of the robotic team.…”
Section: Related Work
confidence: 99%
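
As a rough illustration of the agent/server split this excerpt describes, the Python sketch below defines a hypothetical keyframe message an agent might upload to the central server. Every field name here is an assumption chosen for illustration, not an API of the systems cited in [5], [11], [13], [23], or [25].

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class KeyframeMsg:
        # Hypothetical keyframe payload an agent uploads to the central
        # server; all field names are illustrative only.
        agent_id: int            # which robot produced this keyframe
        kf_id: int               # keyframe index in the agent's local map
        pose: np.ndarray         # 4x4 pose estimate in the agent's odometry frame
        keypoints: np.ndarray    # Nx2 pixel coordinates of tracked features
        descriptors: np.ndarray  # NxD feature descriptors for place recognition
        imu_factor: dict         # preintegrated IMU terms linking consecutive KFs

Shipping only keyframes (rather than every frame) is what keeps the agents' bandwidth and compute load low in such designs, while the server can still run place recognition and global optimization over the pooled map data.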
“…The terms $y_{k,j}$ are the reprojection residuals as defined in Eqs. (10) and (11), with the corresponding weights given by $W_{k,j}^{r} = \sigma_{\mathrm{obs}}^{-2} \cdot I_{2\times 2}$. The set of relative distance measurements with standard deviation $\sigma_d$ between the two agents is denoted by $D$, while the corresponding residual terms are given by…”
Section: E. Optimization Back-end
confidence: 99%
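
To make the weighting concrete, here is a minimal numpy sketch of the two cost contributions the excerpt describes. The reprojection weights follow the excerpt exactly ($W_{k,j}^{r} = \sigma_{\mathrm{obs}}^{-2} \cdot I_{2\times 2}$); the distance residual $d - \lVert p_a - p_b \rVert$ is an assumption, since the quoted text is truncated before the residual is defined.

    import numpy as np

    def reprojection_cost(residuals, sigma_obs):
        # Sum of weighted terms y^T W y with W = sigma_obs^-2 * I_2x2,
        # matching the weights stated in the excerpt.
        W = np.eye(2) / sigma_obs**2
        return sum(float(y @ W @ y) for y in residuals)

    def distance_cost(D, p_a, p_b, sigma_d):
        # Relative-distance terms between the two agents. The residual
        # form d - ||p_a - p_b|| is an assumption; the excerpt is
        # truncated before the residual is defined.
        cost = 0.0
        for k, d in D:  # (keyframe pair index, measured distance)
            r = d - np.linalg.norm(p_a[k] - p_b[k])
            cost += (r / sigma_d) ** 2
        return cost

A nonlinear least-squares solver would minimize the sum of these two costs over the agents' poses, with each term inversely weighted by its measurement variance.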
“…The other research stream [34][35][36] is overlap-detection-based collaborative positioning, where exteroceptive sensors, such as 3D light detection and ranging (LiDAR) or cameras, are employed to detect the overlapping area and thereby establish connections between agents. The work in [37] proposed a collaborative visual simultaneous localization and mapping (SLAM) framework to enhance the accuracy of state estimation for each robot equipped with a monocular camera.…”
Section: Introduction
confidence: 99%
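
As a toy illustration of the overlap-detection idea in this research stream, the sketch below flags a potential map overlap when enough keyframe descriptors from one agent find distinctive nearest neighbours among another agent's descriptors. The ratio test and the thresholds are assumptions for illustration, not the method of [34]-[37].

    import numpy as np

    def detect_overlap(desc_a, desc_b, ratio=0.8, min_matches=20):
        # Toy overlap test between two agents' keyframe descriptors using
        # a nearest-neighbour ratio check; thresholds are illustrative.
        matches = 0
        for d in desc_a:
            dists = np.linalg.norm(desc_b - d, axis=1)
            i1, i2 = np.argsort(dists)[:2]
            if dists[i1] < ratio * dists[i2]:
                matches += 1
        return matches >= min_matches

Once such an overlap is detected, the matched keyframes provide the inter-agent constraints that tie the agents' individual maps into a common frame.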
“…However, robustness and accuracy are limited when relying solely on a monocular camera. The work in [36, 39] employs an IMU to help estimate the scale and performs VINS on each agent. A similar overlapping-area detection scheme is employed to establish the inter-agent connections.…”
Section: Introduction
confidence: 99%