2020
DOI: 10.1007/s42486-020-00040-4
Accurate and robust odometry by fusing monocular visual, inertial, and wheel encoder

Abstract: Tracking the pose of a robot has been gaining importance in the field of Robotics, e.g., paving the way for robot navigation. In recent years, monocular visual-inertial odometry (VIO) has been widely used for pose estimation due to its good performance and low cost. However, VIO cannot estimate scale or orientation accurately when robots move along straight lines or circular arcs on the ground. To address this problem, in this paper we take the wheel encoder into account, which can provide us with stable tr…
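The abstract's point that a wheel encoder supplies metric scale can be illustrated with the standard differential-drive dead-reckoning model; this is a minimal sketch under assumed, illustrative parameter names (tick counts, wheel radius, baseline), not the paper's actual formulation:

```python
import math

def integrate_wheel_odometry(pose, ticks_l, ticks_r,
                             ticks_per_rev, wheel_radius, baseline):
    """Dead-reckon a planar pose (x, y, theta) from incremental encoder ticks.

    Hypothetical signature for illustration: a basic differential-drive
    model, where encoder ticks give metrically scaled wheel arc lengths.
    """
    circumference = 2.0 * math.pi * wheel_radius
    d_l = ticks_l / ticks_per_rev * circumference  # left wheel arc length (m)
    d_r = ticks_r / ticks_per_rev * circumference  # right wheel arc length (m)
    d = 0.5 * (d_l + d_r)            # translation of the body centre
    dtheta = (d_r - d_l) / baseline  # change in heading
    x, y, theta = pose
    # Evaluating heading at the interval mid-point gives a
    # second-order accurate pose update.
    x += d * math.cos(theta + 0.5 * dtheta)
    y += d * math.sin(theta + 0.5 * dtheta)
    return (x, y, theta + dtheta)
```

Because the translation `d` comes from wheel geometry in metres, it pins down the scale that monocular VIO cannot observe during straight-line or constant-arc motion.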

Cited by 2 publications (1 citation statement)
References 32 publications
“…In recent years, SLAM technology has shifted towards multi-sensor fusion, progressing from visual-inertial fusion to lidar-inertial fusion and ultimately fusing lidar, visual, inertial sensors, wheel encoders, and GPS data. Notable advancements include VINS-Mono [9], VINS-Fusion [10], and OpenVINS [11] for visual-inertial fusion, LIC-Fusion [12] and VIL-SLAM [13] for lidar-visual-inertial fusion, [14] and [15] for enhanced fusion with wheel encoders, VIWO [16] for sliding-window filtering to fuse multi-modal data, and [17] for introducing wheel encoder pre-integration theory and noise propagation formula, enabling tight integration with sensor data. By amalgamating data from multiple sensors, these approaches significantly enhance simultaneous localization and mapping robustness and accuracy.…”
Section: B. Methods of Multi-Sensor Fusion
confidence: 99%
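The citation statement mentions wheel encoder pre-integration, i.e. compounding many per-step encoder increments into a single relative-pose factor between two keyframes. A minimal planar sketch of that accumulation, with illustrative names and without the noise-propagation terms the cited theory also derives:

```python
import math

def preintegrate_wheel(increments):
    """Compose per-step wheel odometry increments (d, dtheta) into one
    relative SE(2) transform (dx, dy, dyaw) between two keyframes.

    Sketch of the pre-integration idea only: increments are accumulated
    in the frame of the first keyframe, so the result can be used as a
    single constraint regardless of how many raw encoder readings arrived
    between the keyframes.
    """
    dx = dy = dyaw = 0.0
    for d, dtheta in increments:
        # Rotate each step's translation into the first keyframe's frame,
        # using the mid-point heading of the step.
        dx += d * math.cos(dyaw + 0.5 * dtheta)
        dy += d * math.sin(dyaw + 0.5 * dtheta)
        dyaw += dtheta
    return dx, dy, dyaw
```

In a tightly coupled system, the resulting (dx, dy, dyaw) would enter the optimization as one relative-motion factor alongside the visual and inertial terms.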