2021 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra48506.2021.9561996
LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping

Cited by 177 publications (70 citation statements). References 20 publications.
“…Focusing on LiDAR SLAM, similarly to LOAM and its variants [11], [12], methods like LIO-SAM [13], LIO-MAPPING [14], and HDL-SLAM [3] also estimate the robot poses and a 3D map, but with the option of integrating additional sensor modalities such as IMU and GPS. Other methods, such as LIMO [15], LIRO [16], and LVI-SAM [17], fuse visual and LiDAR measurements for simultaneous localization and mapping. Although these systems have demonstrated significant progress in robustness and accuracy in recent years, their map representation limits them in several application cases.…”
Section: Related Work
confidence: 99%
“…However, they require hardware-synchronized camera-lidar messages, which can only produce data at a very low rate. In [2], Shan et al. proposed a loose integration of the VIO output from VINS-Mono [11] into LIO-SAM [12], which itself loosely integrates LeGO-LOAM output [13] with IMU preintegration in a GTSAM pose-graph optimization framework. This approach can be unreliable, as the whole chain depends on whether the core lidar-based process runs well.…”
Section: Related Work
confidence: 99%
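The loose integration described in the statement above can be pictured as a pose graph in which relative-pose constraints from the lidar-inertial and visual-inertial pipelines are attached to the same keyframes and fused by the optimizer. Below is a minimal, hypothetical sketch of that idea using GTSAM's Python bindings; the keys, relative motions, and noise values are invented for illustration and do not reproduce the actual LIO-SAM or LVI-SAM implementations.

```python
import gtsam
import numpy as np

# Minimal pose-graph sketch (assumed values, for illustration only):
# a lidar-inertial odometry estimate and a visual-inertial odometry
# estimate each contribute a relative-pose constraint between the
# same pair of keyframes.

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Prior on the first keyframe pose.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-3] * 6))
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))
initial.insert(0, gtsam.Pose3())

# Hypothetical relative motions reported by the two odometry pipelines.
lio_delta = gtsam.Pose3(gtsam.Rot3.Yaw(0.020), np.array([1.00, 0.00, 0.0]))
vio_delta = gtsam.Pose3(gtsam.Rot3.Yaw(0.018), np.array([0.98, 0.01, 0.0]))

# Noise sigmas: rotation (rad) first, then translation (m).
lio_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01] * 3 + [0.05] * 3))
vio_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02] * 3 + [0.10] * 3))

# Both between-factors connect keyframes 0 and 1; the optimizer fuses
# them according to their noise models.
graph.add(gtsam.BetweenFactorPose3(0, 1, lio_delta, lio_noise))
graph.add(gtsam.BetweenFactorPose3(0, 1, vio_delta, vio_noise))
initial.insert(1, lio_delta)

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(1))
```

Because both constraints share the same keyframe pair, the optimized pose for keyframe 1 is pulled toward whichever measurement has the tighter noise model, which is the basic mechanism behind this kind of loose sensor fusion.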
“…In low-texture conditions where lidar localization is unstable, its error can propagate up the chain, which appears to be the case in some of our experiments presented later. We also note that the aforementioned works [1], [2], [9], [10], [12], [13] do not consider the integration of multiple lidars. While the MLOAM method [14] addresses this issue, it focuses purely on lidar; no camera or IMU is involved.…”
Section: Related Work
confidence: 99%
“…Lidar odometry and SLAM for creating metric maps of the environment has been widely researched in robotics, with methods such as Cartographer [139] and Hector-SLAM [140] performing complete SLAM using 2D lidar measurements, and LOAM [141] providing a parallel lidar odometry and mapping technique that simultaneously computes the lidar velocity while creating accurate 3D maps of the environment. To further improve accuracy, techniques combining vision and lidar measurements have been presented, such as Lidar-Monocular Visual Odometry (LIMO) [142] and LVI-SAM [143], which combine robust monocular image tracking with precise depth estimates from lidar measurements for motion estimation. Methods like LIRO [144] and VIRAL-SLAM [145] couple additional measurements such as Ultra-Wide Band (UWB) ranging with visual and IMU sensors for robust pose estimation and map building.…”
Section: B Localization and Scene Modeling
confidence: 99%
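The lidar-assisted depth estimation mentioned in the statement above (as in LIMO) boils down to projecting lidar points into the camera image and assigning their depth to nearby tracked features. The following is a minimal sketch of that idea, not the actual LIMO or LVI-SAM code; the intrinsics K, the lidar-to-camera extrinsics (R, t), the pixel radius, and the helper name depth_from_lidar are all hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical camera intrinsics and lidar-to-camera extrinsics.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.05, 0.0, 0.0])

def depth_from_lidar(features_uv, lidar_xyz, max_px_dist=3.0):
    """Assign depth to tracked 2D features from nearby projected lidar points."""
    # Transform lidar points into the camera frame and keep those in front.
    pts_cam = lidar_xyz @ R.T + t
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Project into the image plane (pinhole model).
    uvw = pts_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]

    # For each feature, take the depth of the nearest projected lidar point
    # within a small pixel radius; otherwise leave the depth unknown (NaN).
    tree = cKDTree(uv)
    dist, idx = tree.query(features_uv, distance_upper_bound=max_px_dist)
    depths = np.full(len(features_uv), np.nan)
    valid = np.isfinite(dist)
    depths[valid] = pts_cam[idx[valid], 2]
    return depths

# Example usage with synthetic data.
features = np.array([[320.0, 240.0], [100.0, 50.0]])
cloud = np.random.uniform(-5, 5, size=(1000, 3)) + np.array([0.0, 0.0, 8.0])
print(depth_from_lidar(features, cloud))
```

In practice, systems of this kind also interpolate between neighboring lidar points or fit a local plane rather than taking a single nearest neighbor, but the association step shown here is the core of coupling monocular feature tracking with lidar depth.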