2020
DOI: 10.3390/s20154177

A Panoramic Localizer Based on Coarse-to-Fine Descriptors for Navigation Assistance

Abstract: Visual Place Recognition (VPR) addresses visual instance retrieval tasks across discrepant scenes and gives precise localization. During a traverse, the captured images (query images) are traced back to already existing positions in the database images, enabling vehicles or pedestrian navigation devices to distinguish the ambient environment. Unfortunately, diverse appearance variations can bring about huge challenges for VPR, such as illumination changes, viewpoint variations, seasonal cycling, disparate t…
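
As a rough illustration of the retrieval step the abstract describes (a minimal sketch, not the paper's actual pipeline), a query descriptor is compared against precomputed database descriptors and the closest entries give the place hypothesis. The descriptor extractor is assumed to exist; random vectors stand in for real descriptors here.

    import numpy as np

    def l2_normalize(x, axis=-1, eps=1e-12):
        # Unit-length descriptors make the dot product equal to cosine similarity.
        return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

    def retrieve_place(query_desc, db_descs, top_k=5):
        # query_desc: (D,) global descriptor of the query image (assumed precomputed)
        # db_descs:   (N, D) global descriptors of the database images
        q = l2_normalize(np.asarray(query_desc, dtype=np.float32))
        db = l2_normalize(np.asarray(db_descs, dtype=np.float32))
        sims = db @ q                      # similarity to every database entry
        order = np.argsort(-sims)[:top_k]  # best matches first
        return order, sims[order]

    # Hypothetical usage with random stand-in descriptors.
    rng = np.random.default_rng(0)
    database = rng.normal(size=(1000, 256))  # 1000 reference images, 256-D descriptors
    query = rng.normal(size=256)
    indices, scores = retrieve_place(query, database, top_k=3)
    print(indices, scores)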

Cited by 9 publications (7 citation statements) · References 54 publications

Citation statements (ordered by relevance):
“…For generating correct directions, localization precision is critical. Global Positioning Systems (GPS) [39], [40], [41], [42], [43], [44], [45], [46] and image-based [47], [48], [49], [50] techniques can be used to allow PVI to identify their location.…”
Section: Journey Planning
Citation type: mentioning (confidence: 99%)
“…13(a)), which incorporates a panoramic annular lens and active deep image descriptors in a visual localization system. CFVL [294], [295] designed a coarse-to-fine visual localization framework that uses equirectangular representations in the coarse stage, to keep feature consistency with the panorama-trained descriptor, and cube-map representations in the fine key-point matching stage, to conform to the planarity of the images. In [296], a localization framework is designed using high-level semantic information, such as detected landmarks and ground surface boundaries, obtained via an omnidirectional camera, and localization is conducted via an extended Kalman filter.…”
Section: Visual Localization, Odometry and SLAM
Citation type: mentioning (confidence: 99%)
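
The coarse-to-fine idea quoted above (coarse global-descriptor retrieval followed by fine key-point matching) can be sketched generically as follows. This is a simplified sketch, not CFVL's implementation: ORB features stand in for the learned descriptors, and the equirectangular and cube-map projections are omitted.

    import cv2
    import numpy as np

    def coarse_shortlist(query_desc, db_descs, k=10):
        # Coarse stage: rank database frames by global-descriptor cosine similarity.
        q = query_desc / (np.linalg.norm(query_desc) + 1e-12)
        db = db_descs / (np.linalg.norm(db_descs, axis=1, keepdims=True) + 1e-12)
        return np.argsort(-(db @ q))[:k]

    def fine_rerank(query_img, candidate_imgs):
        # Fine stage: re-rank the shortlisted frames by local key-point matches.
        # ORB is a stand-in for the paper's descriptors; images are grayscale uint8.
        orb = cv2.ORB_create(nfeatures=1000)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        _, des_q = orb.detectAndCompute(query_img, None)
        scores = []
        for img in candidate_imgs:
            _, des_c = orb.detectAndCompute(img, None)
            if des_q is None or des_c is None:
                scores.append(0)
                continue
            matches = matcher.match(des_q, des_c)
            # Count sufficiently close matches as a crude consistency score.
            scores.append(sum(1 for m in matches if m.distance < 40))
        return int(np.argmax(scores)), scores

The index returned by fine_rerank points into the coarse shortlist; in the actual framework the coarse descriptor would be computed on the equirectangular panorama and the fine matching on cube-map faces, which this sketch leaves out.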
“…Chen et al [9] design the PALVO, which applies PAL to visual odometry by modifying the camera model and specially designing an initialization process. Fang et al [10] employ PAL for visual place recognition, leveraging panoramas for omnidirectional perception with FoV up to 360°. Shi et al [12] explore 360° optical flow estimation based on the cyclicity of panoramas.…”
Section: Panoramic Computer Vision Tasks
Citation type: mentioning (confidence: 99%)
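
The "cyclicity" referred to in [12] is the fact that the left and right edges of an equirectangular panorama are physically adjacent. A minimal, assumed-for-illustration way to exploit it is circular padding along the width before running any flow or feature estimator:

    import numpy as np

    def wrap_pad(equirect, pad):
        # Circularly pad an equirectangular image (H, W) or (H, W, C) along its width,
        # so features near the left/right seam see continuous context.
        left = equirect[:, -pad:]   # rightmost columns wrap around to the left side
        right = equirect[:, :pad]   # leftmost columns wrap around to the right side
        return np.concatenate([left, equirect, right], axis=1)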
“…Images with very large Field of View (FoV) are attracting more attention on account of the more abundant information they provide about the surrounding environment for some Computer Vision (CV) tasks [1]. A considerable amount of research is carried out on semantic segmentation [2]-[4], object detection [5], [6], visual SLAM [7]-[9] and other CV tasks [10]-[12] under images with 360° FoV, namely panoramic CV tasks. Mainstream camera models (e.g.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)