2020 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra40945.2020.9196769

Online LiDAR-SLAM for Legged Robots with Robust Registration and Deep-Learned Loop Closure

Cited by 33 publications (16 citation statements) · References 28 publications
“…Kinematic-Inertial Odometry is effective for high-frequency estimation over short time intervals, whereas Visual-Inertial Odometry, through its use of lower-frequency exteroceptive sensors, remains reliable over longer periods. As both odometries are available on the robot and the experiments do not require high-frequency state estimation, it was decided to use Visual-Inertial Odometry instead of the Kinematic-Inertial Odometry used in the initial system [10], [13].…”
Section: Visual-Inertial Odometry Initialization
Mentioning confidence: 99%
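A minimal Python sketch of the odometry-source choice this statement describes: when the consumer does not need a high update rate, prefer the stream that stays reliable longest. The class, rates, and reliability horizons below are hypothetical illustration values, not figures from the cited work.

```python
from dataclasses import dataclass

@dataclass
class OdometrySource:
    name: str
    rate_hz: float             # nominal update rate
    reliable_horizon_s: float  # how long the estimate stays trustworthy

# Illustrative numbers only (assumed, not reported in the paper):
# kinematic-inertial is fast but drifts quickly; visual-inertial is
# slower but reliable over a longer window.
kio = OdometrySource("Kinematic-Inertial", rate_hz=400.0, reliable_horizon_s=2.0)
vio = OdometrySource("Visual-Inertial", rate_hz=30.0, reliable_horizon_s=60.0)

def pick_odometry(sources, required_rate_hz):
    """Among sources fast enough for the consumer, pick the one that
    remains reliable longest -- mirroring the reasoning quoted above."""
    fast_enough = [s for s in sources if s.rate_hz >= required_rate_hz]
    return max(fast_enough, key=lambda s: s.reliable_horizon_s)

# LiDAR registration at ~10 Hz does not need high-frequency state,
# so the lower-rate visual-inertial stream is preferred.
assert pick_odometry([kio, vio], required_rate_hz=10.0) is vio
```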
“…SLAM describes the problem of mapping the environment while self-localizing in the map being built. To accomplish these tasks, multiple sensors can be used, including LiDAR [7], inertial measurement units (IMUs) [8], cameras (RGB [6], stereo [9], RGB-D [10]), or a combination of visual and inertial sensors [11]. SLAM that uses only cameras as sensors is usually called visual SLAM, referred to as vSLAM in this paper, and is the variant discussed in this work.…”
Section: State of the Art
Mentioning confidence: 99%
“…For instance, in LiDAR Odometry And Mapping (LOAM), LiDAR data are represented as point clouds, and the robot's relative pose is estimated by directly matching edge and planar features extracted from those point clouds [3]. Recent advances in machine learning algorithms have further accelerated this trend [4,5].…”
Section: Introduction
Mentioning confidence: 99%
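The edge/planar split that LOAM performs can be sketched compactly: points with high local curvature along a scan line become edge features, and low-curvature points become planar features. The Python below is a minimal illustration under assumed parameters (the neighbourhood size k and both thresholds are hypothetical), not the LOAM reference implementation.

```python
import numpy as np

def extract_loam_features(scan_line, k=5, edge_thresh=1.0, planar_thresh=0.1):
    """Classify points on one LiDAR scan line by local curvature,
    in the spirit of LOAM [3].

    scan_line: (N, 3) array of XYZ points ordered by azimuth.
    k: points on each side used in the curvature sum (assumed value).
    Thresholds are illustrative, not taken from the paper.
    """
    n = scan_line.shape[0]
    curvature = np.zeros(n)
    valid = np.zeros(n, dtype=bool)
    for i in range(k, n - k):
        # c_i ~ || sum over the neighbourhood of (p_j - p_i) ||,
        # normalised by neighbourhood size and range to the point.
        diff = scan_line[i - k:i + k + 1] - scan_line[i]
        c = np.linalg.norm(diff.sum(axis=0))
        curvature[i] = c / (2 * k * np.linalg.norm(scan_line[i]) + 1e-9)
        valid[i] = True

    edge_idx = np.where(valid & (curvature > edge_thresh))[0]      # sharp points
    planar_idx = np.where(valid & (curvature < planar_thresh))[0]  # flat points
    return edge_idx, planar_idx
```

The two feature sets are then registered against the map to recover relative motion; this sketch stops at the feature-extraction step that the quoted statement describes.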