2023
DOI: 10.1088/1361-6501/ad03b9
A visual SLAM method assisted by IMU and deep learning in indoor dynamic blurred scenes

Fengyu Liu,
Yi Cao,
Xianghong Cheng
et al.

Abstract: Dynamic targets in the environment can seriously affect the accuracy of SLAM systems. This article proposes a novel dynamic visual SLAM method with IMU and deep learning for indoor dynamic blurred scenes. It improves the front end of ORB-SLAM2, combining deep learning with geometric constraints to make the elimination of dynamic feature points more reasonable and robust. First, a multi-directional superposition blur augmentation algorithm is added to the YOLOv5s network to compensate for errors caused by fast-m…
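The abstract only names the multi-directional superposition blur augmentation; the paper's actual implementation is not shown here. A minimal illustrative sketch of the general idea — convolving a training image with linear motion-blur kernels at several directions and superimposing the results — might look like the following. Function names, kernel size, and the angle set are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.ndimage import convolve


def motion_blur_kernel(length, angle_deg):
    """Normalized line kernel approximating linear motion blur
    of the given length and direction."""
    k = np.zeros((length, length))
    c = (length - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    # Rasterize a line through the kernel center at the given angle.
    for t in np.linspace(-c, c, 4 * length):
        y = int(round(c + t * np.sin(theta)))
        x = int(round(c + t * np.cos(theta)))
        k[y, x] = 1.0
    return k / k.sum()


def multi_directional_blur(img, length=9, angles=(0, 45, 90, 135)):
    """Superimpose motion blur from several directions on a grayscale
    image, emulating blur from fast camera motion (hypothetical
    stand-in for the paper's augmentation step)."""
    acc = np.zeros_like(img, dtype=np.float64)
    for a in angles:
        acc += convolve(img.astype(np.float64),
                        motion_blur_kernel(length, a), mode="reflect")
    return acc / len(angles)
```

Augmenting the YOLOv5s training set with such blurred copies would expose the detector to the degraded appearance of dynamic objects under rapid motion, which is the stated purpose of the augmentation step.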

Cited by 1 publication (1 citation statement)
References 33 publications
“…In recent times, propelled by advancements in visual SLAM [6] and 3D LiDAR SLAM [7], multi-sensor fusion for motion and pose estimation has found extensive utility. Given that indoor robots generally move at slow speeds with frequent stops and starts, the performance of IMU under such conditions may not be ideal, whereas wheel odometry demonstrates superior performance, especially when the robot is stationary and there is no error accumulation.…”
Section: Introduction
confidence: 99%