2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8460664

A Benchmark Comparison of Monocular Visual-Inertial Odometry Algorithms for Flying Robots

Abstract: Flying robots require a combination of accuracy and low latency in their state estimation in order to achieve stable and robust flight. However, due to the power and payload constraints of aerial platforms, state estimation algorithms must provide these qualities under the computational constraints of embedded hardware. Cameras and inertial measurement units (IMUs) satisfy these power and payload constraints, so visual-inertial odometry (VIO) algorithms are popular choices for state estimation in these scenarios…

Cited by 338 publications (243 citation statements)
References 31 publications
“…On a static track (Figure 5a), a SLAM-based state estimator [5], [11] would have less drift than a VIO baseline, but we empirically found the latency of existing open-source SLAM pipelines to be too high for closed-loop control. A benchmark comparison of latencies of monocular visual-inertial SLAM algorithms for flying robots can be found in [50].…”
Section: B. Experiments in Simulation (mentioning)
confidence: 99%
“…The state-of-the-art EVIO system, UltimateSLAM [43], operates by independently tracking visual features from pseudo-images reconstructed from events and optional images from a conventional camera, and fusing the tracks with inertial measurements using an existing optimization backend [23]. Here, we go one step further and directly apply an off-the-shelf VIO system (specifically, VINS-Mono [37], which is state-of-the-art [11]) to videos reconstructed from events using either our approach, MR, or HF, and evaluate against UltimateSLAM. As is standard [55, 39, 43], we use sequences from the Event Camera Dataset [31], which contain events, frames, and IMU measurements from a DAVIS240C [7] sensor.…”
Section: Visual-Inertial Odometry (mentioning)
confidence: 99%
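The pipeline summarized in the statement above (turn an event stream into frames, then hand those frames plus IMU data to a conventional frame-based VIO system) can be illustrated with a minimal sketch. The sketch below only bins events into naive accumulation images as a stand-in for reconstruction; this is an assumption for illustration, not the cited work's method (which uses a learned events-to-video model before running VINS-Mono), and the function names are hypothetical.

import numpy as np


def events_to_frame(events: np.ndarray, height: int, width: int) -> np.ndarray:
    """Accumulate event polarities into a single naive pseudo-image.

    events: (N, 4) array with columns (timestamp, x, y, polarity in {-1, +1}).
    """
    frame = np.zeros((height, width), dtype=np.float32)
    xs = events[:, 1].astype(np.intp)
    ys = events[:, 2].astype(np.intp)
    # np.add.at handles repeated pixel indices correctly.
    np.add.at(frame, (ys, xs), events[:, 3].astype(np.float32))
    lo, hi = frame.min(), frame.max()
    if hi > lo:
        frame = (frame - lo) / (hi - lo)  # normalize for a frame-based feature tracker
    return frame


def pseudo_frames(events: np.ndarray, height: int, width: int,
                  frame_rate_hz: float = 30.0):
    """Yield (timestamp, pseudo-image) pairs at a fixed rate.

    A real system would replace this with learned video reconstruction and
    then feed the frames, together with the raw IMU measurements, to an
    off-the-shelf VIO system such as VINS-Mono.
    """
    t0, t1 = events[0, 0], events[-1, 0]
    dt = 1.0 / frame_rate_hz
    t = t0
    while t < t1:
        chunk = events[(events[:, 0] >= t) & (events[:, 0] < t + dt)]
        if len(chunk) > 0:
            yield t + dt, events_to_frame(chunk, height, width)
        t += dt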
“…We use VINS-Mono [11], a tightly-coupled visual-inertial odometry framework that has been shown to perform favorably when compared to other state-of-the-art open-source state estimation algorithms [2]. VINS-Mono jointly optimizes vehicle motion, feature locations, camera-IMU extrinsics, and IMU biases over a sliding window of monocular images and preintegrated IMU measurements.”
Section: State Estimation (mentioning)
confidence: 99%
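The sliding-window optimization summarized in this statement can be written schematically as a nonlinear least-squares problem; the notation below is an illustrative sketch rather than a formula copied from the VINS-Mono paper.

\min_{\mathcal{X}} \;
\underbrace{\big\lVert \mathbf{r}_{\mathrm{prior}}(\mathcal{X}) \big\rVert^{2}}_{\text{marginalization prior}}
+ \sum_{k=0}^{K-1} \big\lVert \mathbf{r}_{\mathrm{IMU}}\big(\hat{\mathbf{z}}_{k,k+1}, \mathcal{X}\big) \big\rVert^{2}_{\mathbf{P}_{k,k+1}}
+ \sum_{(l,j)} \rho\Big( \big\lVert \mathbf{r}_{\mathrm{cam}}\big(\hat{\mathbf{z}}_{l,j}, \mathcal{X}\big) \big\rVert^{2}_{\mathbf{P}_{l,j}} \Big)

Here \mathcal{X} stacks the window's poses, velocities, and IMU biases together with feature parameters and the camera-IMU extrinsics; \mathbf{r}_{\mathrm{IMU}} are preintegrated-IMU residuals between consecutive frames, \mathbf{r}_{\mathrm{cam}} are visual reprojection residuals, and \rho is a robust loss.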