2017
DOI: 10.1007/978-3-319-65292-4_10
Correction Algorithm of LIDAR Data for Mobile Robots

Cited by 5 publications (4 citation statements) · References 6 publications
“…In 2017, Liya Han et al. [97, 98, 99] installed a binocular vision system with a structured light projector on an industrial robot to form a 3D measurement system for high-temperature metal components. By continuously moving the structured light scanner with the robot, 3D data can be obtained from multiple viewing angles, which overcomes the limitations of environmental occlusion and a single viewing angle.…”
Section: Multi-View Stereo Vision Measurement Methods (mentioning)
confidence: 99%
“…The Lidar motion distortion removal method used in our navigation system is the same as that in reference [29]. The feature extraction and motion calculation methods are the same as in references [18,19].…”
Section: The IMU Pre-Integration Factor and the Lidar Factor (mentioning)
confidence: 99%
“…The correction equation for the state is shown in Equation (23), where K is the Kalman gain given in Equation (24). The correction equation for the error covariance is shown in Equation (25).…”
Section: R Aruco World (mentioning)
confidence: 99%
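The snippet above cites the standard Kalman correction (measurement-update) step via its Equations (23)–(25). The cited paper's exact formulation is not reproduced here; the sketch below shows the generic form, where the measurement model H, measurement noise R, and measurement z are illustrative assumptions, not values from the cited work.

```python
import numpy as np

def kalman_correct(x_pred, P_pred, z, H, R):
    """Generic Kalman correction step: update the predicted state x_pred
    and covariance P_pred with measurement z. H and R are assumed
    placeholders for the measurement model and noise covariance."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain (cf. Eq. 24)
    x = x_pred + K @ (z - H @ x_pred)        # corrected state (cf. Eq. 23)
    I = np.eye(P_pred.shape[0])
    P = (I - K @ H) @ P_pred                 # corrected covariance (cf. Eq. 25)
    return x, P

# Toy 1-D position/velocity state with a position-only measurement
x_pred = np.array([0.0, 1.0])
P_pred = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
z = np.array([0.3])
x, P = kalman_correct(x_pred, P_pred, z, H, R)
```

The gain K weighs the innovation `z - H @ x_pred` by how much the filter trusts the measurement relative to the prediction; with the toy numbers above, K attenuates the position innovation by a factor of 2/3.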
“…When collecting the laser point cloud, first perform motion compensation on each point, align the timestamps, and project a period of the point cloud onto a single point-cloud image, recorded as frame n [25]. Then perform feature extraction on that frame's point-cloud image.…”
Section: Lidar Factor (mentioning)
confidence: 99%
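The per-point motion compensation (de-skewing) described above can be sketched as re-projecting every point into the sensor pose at one reference timestamp. The constant-velocity planar pose model and all names below (`deskew_points`, `omega_z`, etc.) are assumptions for illustration, not taken from the cited paper.

```python
import numpy as np

def deskew_points(points, timestamps, t_ref, velocity, omega_z):
    """Re-project each 2-D point into the sensor frame at t_ref, assuming
    constant planar linear velocity `velocity` and yaw rate `omega_z`
    over the scan (a common simplification for Lidar de-skewing)."""
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, timestamps)):
        dt = t - t_ref
        yaw = omega_z * dt                   # yaw accumulated since t_ref
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])      # rotation of the sensor pose
        trans = velocity * dt                # translation since t_ref
        out[i] = R @ p + trans               # point expressed at t_ref
    return out

# Two points captured 0.1 s apart while translating at 1 m/s along x
points = np.array([[1.0, 0.0], [0.0, 1.0]])
timestamps = np.array([0.0, 0.1])
deskewed = deskew_points(points, timestamps, t_ref=0.0,
                         velocity=np.array([1.0, 0.0]), omega_z=0.0)
```

After de-skewing, the whole scan can be treated as if it were captured instantaneously at `t_ref`, which is what allows projecting a period of points onto a single frame before feature extraction.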