2022
DOI: 10.3390/s22114114

Mobile Robot Localization and Mapping Algorithm Based on the Fusion of Image and Laser Point Cloud

Abstract: Because the image features detected by a visual SLAM (simultaneous localization and mapping) algorithm lack scale information, the accumulation of many features without depth information causes scale ambiguity, which leads to degradation and tracking failure. In this paper, we introduce the lidar point cloud to provide additional depth information for the image features during ego-motion estimation and thereby assist visual SLAM. To enhance the stability of the pose estimation, the front-end of visual SLAM bas…
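The abstract describes supplying lidar depth to image features for ego-motion estimation. As a rough illustration only, and not the paper's actual front-end, the sketch below shows one common way to realize such a fusion: project the lidar point cloud into the image using assumed camera intrinsics and lidar-to-camera extrinsics, then assign each detected feature the depth of the nearest projected lidar point. The function name, parameters, and pixel-distance threshold are all hypothetical.

```python
# Illustrative sketch only: assigning lidar depth to image features by
# projecting the point cloud into the camera. Not the paper's method;
# K, (R, t), and max_px_dist are assumed placeholder values.
import numpy as np
from scipy.spatial import cKDTree

def depth_from_lidar(features_uv, lidar_xyz, K, R, t, max_px_dist=3.0):
    """Return a depth (or NaN) for each 2D feature using nearby lidar points.

    features_uv : (N, 2) pixel coordinates of detected image features
    lidar_xyz   : (M, 3) lidar points in the lidar frame
    K           : (3, 3) camera intrinsic matrix
    R, t        : rotation (3, 3) and translation (3,) from lidar to camera
    """
    # Transform lidar points into the camera frame and keep those in front.
    pts_cam = lidar_xyz @ R.T + t
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Project the remaining points onto the image plane.
    proj = pts_cam @ K.T
    uv = proj[:, :2] / proj[:, 2:3]

    # For each feature, take the depth of the nearest projected lidar point
    # if it lies within a small pixel radius; otherwise leave depth unknown.
    tree = cKDTree(uv)
    dist, idx = tree.query(features_uv, k=1)
    return np.where(dist < max_px_dist, pts_cam[idx, 2], np.nan)
```

In a sketch like this, features that receive a valid depth can be used as scaled 3D-2D correspondences in pose estimation, while features without nearby lidar returns remain depth-free, which is one plausible way to mitigate the scale ambiguity the abstract mentions.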

Cited by 1 publication (1 citation statement)
References 16 publications
“…We believe that this labeled dataset can be useful for training data-hungry deep learning techniques such as image segmentation [ 34 ] or 3D point cloud semantic classification [ 35 ], but it can also be employed for testing SLAM [ 36 ] or for camera and LiDAR integration [ 37 ]. Furthermore, the robotic simulations can be directly employed for reinforcement learning [ 38 ].…”
Section: Introduction (mentioning)
confidence: 99%