2021
DOI: 10.1109/lra.2021.3056380
Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry

Cited by 88 publications (49 citation statements) · References 28 publications
“…Zuo et al. [14] introduced LIC-Fusion, which tightly couples lidar edge features, sparse visual features, and plane features within the MSCKF framework [15]. Wisth et al. [16] present a unified multi-sensor odometry that jointly optimizes 3D primitives such as lines and planes. However, since the measurements of these methods are so tightly coupled, they are difficult to extend to other sensors.…”
Section: B. Tightly Coupled Laser-Visual-Inertial Odometry
confidence: 99%
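The plane primitives mentioned in this statement are commonly parameterized by a unit normal n and an offset d, with the residual for a lidar point p under a pose (R, t) given by n·(Rp + t) − d. A minimal NumPy sketch of such a residual (an illustration of the general technique, not the cited paper's implementation; all names are hypothetical):

```python
import numpy as np

def point_to_plane_residual(R, t, p, n, d):
    """Signed distance of the transformed point R @ p + t from the plane n . x = d."""
    return float(n @ (R @ p + t) - d)

# Example: identity pose, a point lying on the plane z = 1 -> zero residual.
R = np.eye(3)
t = np.zeros(3)
n = np.array([0.0, 0.0, 1.0])   # unit plane normal
p = np.array([0.5, -0.2, 1.0])  # lidar point
r = point_to_plane_residual(R, t, p, n, 1.0)  # -> 0.0
```

In a joint optimization, residuals of this form for each tracked plane are stacked alongside visual and inertial terms and minimized over the poses.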
“…Wisth et al. [10] used factor graphs to fuse visual information with inertial and kinematic measurements. In [11], the same authors improved the factor-graph formulation by adding feature-tracking-based lidar measurements.…”
Section: Kinematic-Inertial State Estimation in Legged Robots
confidence: 99%
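The factor-graph fusion described in this statement reduces, in the linear case, to stacking one row per factor (prior, odometry, landmark measurement) into a least-squares problem over the poses. A toy 1D sketch of this idea (illustrative only; the values and factor set are invented, not taken from the cited works):

```python
import numpy as np

# Toy linear "factor graph": estimate 1D poses x0, x1, x2 from a prior on x0,
# odometry factors (x_{i+1} - x_i = u_i), and one landmark range factor
# (l - x2 = z) to a known landmark at l = 5.0. Each factor is one row of A.
u = [1.0, 1.0]   # odometry increments
l = 5.0          # known landmark position
z = 2.9          # measured range from x2 to the landmark

A = np.array([
    [1.0,  0.0, 0.0],   # prior:    x0      = 0
    [-1.0, 1.0, 0.0],   # odometry: x1 - x0 = u[0]
    [0.0, -1.0, 1.0],   # odometry: x2 - x1 = u[1]
    [0.0,  0.0, -1.0],  # landmark: -x2     = z - l
])
b = np.array([0.0, u[0], u[1], z - l])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
# x ~ [0.025, 1.05, 2.075]: the odometry (x2 = 2.0) and the landmark
# factor (x2 = 2.1) are reconciled by the joint solve.
```

Real systems such as those cited use nonlinear factors over SE(3) poses and re-linearize iteratively, but the stacking-and-solving structure is the same.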
“…This bounds computation time and achieves the same accuracy as using all available features; • Extensive experimental evaluation across a range of scenarios, demonstrating superior robustness, particularly when VIO with an individual camera fails. The proposed algorithm, VILENS-MC, builds upon our previous VILENS estimation system [7], [8] by fusing multiple cameras and improving front-end feature processing.…”
Section: A. Contribution
confidence: 99%
“…To generate ground truth, ICP was used to align the current lidar scan to detailed prior maps collected with a commercial laser mapping system. The high-frequency motion estimate from the IMU was used to carefully remove lidar motion distortion [7]. For an in-depth discussion of ground-truth generation, the reader is referred to [29].…”
Section: A. Datasets
confidence: 99%
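The motion undistortion mentioned in this statement compensates for the fact that a lidar sweep is captured over tens of milliseconds while the sensor moves: each point is transformed back into the start-of-sweep frame using the motion accumulated up to its capture time. A simplified constant-velocity sketch of this de-skewing (the cited work uses the full IMU motion estimate; function and variable names here are hypothetical):

```python
import numpy as np

def deskew(points, timestamps, velocity, sweep_start):
    """Express each point in the start-of-sweep frame by adding back the
    translation the sensor accumulated up to that point's capture time.
    points: (N, 3) array, timestamps: (N,) seconds, velocity: (3,) m/s."""
    dt = (timestamps - sweep_start)[:, None]  # per-point elapsed time
    return points + velocity[None, :] * dt

# A point captured 50 ms into the sweep while the sensor moves at 1 m/s
# along x is shifted 5 cm forward in the start-of-sweep frame.
pts = deskew(np.array([[1.0, 0.0, 0.0]]),
             np.array([0.05]),
             np.array([1.0, 0.0, 0.0]),
             0.0)  # -> [[1.05, 0.0, 0.0]]
```

In practice rotation is interpolated as well (e.g. via the IMU orientation estimate), not just translation, but the per-point time-warping structure is the same.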