2023
DOI: 10.1016/j.ast.2023.108484
Hybrid iteration and optimization-based three-dimensional reconstruction for space non-cooperative targets with monocular vision and sparse lidar fusion

Cited by 6 publications (1 citation statement)
References 37 publications
“…In the field of target detection and tracking, by obtaining the extrinsic parameters between the camera and LiDAR sensor, the data from the two can be aligned in the same coordinate system to achieve multi-sensor fusion target detection and tracking tasks, thereby improving tracking accuracy. In the field of 3D reconstruction [15][16][17], obtaining an accurate spatial relationship between the two is conducive to obtaining richer and more accurate 3D scene information. Combining the high-precision 3D information from the LiDAR sensor and the RGB information from the camera ensures more efficient and reliable 3D reconstruction results.…”
Section: Introduction
confidence: 99%
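The quoted statement describes the core idea of camera–LiDAR fusion: once the extrinsic parameters (a rigid-body rotation and translation) between the two sensors are known, LiDAR points can be expressed in the camera frame and projected onto the image, associating 3-D geometry with RGB pixels. A minimal sketch of that transform chain is below; the extrinsic and intrinsic values are illustrative placeholders, not taken from the cited paper.

```python
import numpy as np

# Hypothetical extrinsics mapping the LiDAR frame into the camera frame,
# plus a pinhole intrinsic matrix K. All values are placeholders.
R = np.eye(3)                        # LiDAR-to-camera rotation
t = np.array([0.1, 0.0, 0.0])        # LiDAR-to-camera translation (metres)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])  # camera intrinsics

def lidar_to_pixel(p_lidar):
    """Express a 3-D LiDAR point in the camera frame, then project it."""
    p_cam = R @ p_lidar + t          # rigid-body transform (extrinsics)
    u, v, w = K @ p_cam              # perspective projection (intrinsics)
    return np.array([u / w, v / w])  # pixel coordinates

# A point 5 m ahead of the LiDAR lands near the image centre,
# shifted by the 0.1 m baseline between the sensors.
pixel = lidar_to_pixel(np.array([0.0, 0.0, 5.0]))
```

This pixel correspondence is what lets a reconstruction pipeline attach accurate LiDAR depth to dense RGB information, as the statement notes.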