2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2016
DOI: 10.1109/iros.2016.7759304
Monocular camera localization in 3D LiDAR maps

Cited by 134 publications (84 citation statements)
References 21 publications
“…Moreover, in order to compare the localization performances with the state-of-the-art monocular localization in LiDAR maps [3], we calculated mean and standard deviation for both rotation and translation components over 10 runs on the sequence 00 of the KITTI odometry dataset. Our approach shows comparable values for the translation component (0.33 ± 0.22m w.r.t.…”
Section: Iterative Refinement and Overall Assessment (mentioning; confidence 99%)
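The evaluation protocol quoted above reports translation accuracy as mean ± standard deviation over repeated runs. A minimal sketch of that aggregation step, using only the Python standard library; the per-run error values below are invented placeholders, not numbers from either paper:

```python
# Sketch: aggregating per-run localization errors into "mean ± std",
# as in the "0.33 ± 0.22 m over 10 runs" style of reporting above.
# The error list is hypothetical, for illustration only.
import statistics

def summarize(errors):
    """Return (mean, sample standard deviation) of per-run errors."""
    return statistics.mean(errors), statistics.stdev(errors)

# Hypothetical translation errors (metres) from 10 runs on one sequence.
translation_errors = [0.31, 0.29, 0.40, 0.35, 0.28, 0.33, 0.37, 0.30, 0.36, 0.32]
mean_t, std_t = summarize(translation_errors)
print(f"translation: {mean_t:.2f} ± {std_t:.2f} m")
```

The same helper applies unchanged to the rotation component; only the input list differs.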
“…Different options have been investigated to solve the localization problem, including approaches based on both vision and Light Detection And Ranging (LiDAR); they share the exploitation of an a-priori knowledge of the environment in the localization process [3]- [5]. Localization approaches that utilize the same sensor for mapping and localization usually achieve good performances, as the map of the scene is matched to the same kind of data generated by the onboard sensor.…”
Section: Introduction (mentioning; confidence 99%)
“…Moreover, the authors sought the maximum normalized mutual information between real camera measurements and these synthetic views. Caselitz et al tracked a monocular camera within a LiDAR map by matching sparse camera point clouds acquired from ORB‐SLAM with an a priori LiDAR map to find corresponding points. However, the point cloud constructed by a camera is sparse and lacks structural feature information.…”
Section: Related Work (mentioning; confidence 99%)
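The passage above describes registering a sparse visual point cloud (from ORB-SLAM) against a prior LiDAR map. A deliberately simplified, pure-Python sketch of one data-association-plus-alignment step follows: it matches points by brute-force nearest neighbour and estimates only a translation offset, whereas a real system jointly estimates rotation and translation and iterates. All function names and point data here are invented for illustration:

```python
# Simplified sketch of one registration step between a sparse camera
# point cloud and a prior LiDAR map. Real pipelines estimate a full
# 6-DoF pose and iterate (ICP-style); this toy version recovers only
# a translation from nearest-neighbour correspondences.
import math

def nearest(p, cloud):
    """Index of the map point closest to p (brute-force nearest neighbour)."""
    return min(range(len(cloud)), key=lambda i: math.dist(p, cloud[i]))

def estimate_translation(camera_pts, map_pts):
    """Associate each camera point with its nearest map point, then
    return the centroid offset that moves the camera cloud onto its
    matched map points."""
    matches = [map_pts[nearest(p, map_pts)] for p in camera_pts]
    n = len(camera_pts)
    cam_c = [sum(p[k] for p in camera_pts) / n for k in range(3)]
    map_c = [sum(m[k] for m in matches) / n for k in range(3)]
    return tuple(map_c[k] - cam_c[k] for k in range(3))

# Toy LiDAR map and a camera cloud offset by (0.2, 0.0, 0.0).
lidar_map = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
camera_cloud = [(x - 0.2, y, z) for (x, y, z) in lidar_map]
offset = estimate_translation(camera_cloud, lidar_map)
print(tuple(round(v, 3) for v in offset))  # → (0.2, 0.0, 0.0)
```

The sparsity issue the citing authors raise is visible even here: with few camera points, a single wrong nearest-neighbour association can bias the estimate, which is why dense structural features or iterative refinement are used in practice.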
“…For an accurate geo-localization not affected by scale drift, prior information in a geographic information system (GIS) has been utilized in previous studies. For example, point clouds, 3D models, building footprints, and road maps have been proven to be efficient for correcting reconstructed 3D maps [5,6,7,8,9]. However, these priors are only available in limited situations, e.g., in an area that is observed in advance, or in an environment consisting of simply-shaped buildings.…”
Section: Introduction (mentioning; confidence 99%)