2015 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2015.7140088

Towards life-long visual localization using an efficient matching of binary sequences from images

Abstract: Life-long visual localization is one of the most challenging topics in robotics over the last few years. The difficulty of this task is in the strong appearance changes that a place suffers due to dynamic elements, illumination, weather or seasons. In this paper, we propose a novel method (ABLE-M) to cope with the main problems of carrying out a robust visual topological localization along time. The novelty of our approach resides in the description of sequences of monocular images as binary codes, which are e…
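The abstract describes encoding image sequences as binary codes that can be compared very efficiently. As a minimal illustration of the underlying idea (not the paper's actual ABLE-M descriptor — the descriptor sizes and names below are illustrative), the sketch matches packed binary descriptors by Hamming distance:

```python
import numpy as np

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two binary descriptors packed as uint8 arrays."""
    return int(np.unpackbits(a ^ b).sum())

def match_sequences(query: np.ndarray, database: list) -> int:
    """Return the index of the database descriptor closest to the query."""
    distances = [hamming_distance(query, d) for d in database]
    return int(np.argmin(distances))

# Toy 32-byte (256-bit) descriptors, analogous in spirit to the binary
# sequence codes from the abstract (sizes and values are made up here).
rng = np.random.default_rng(0)
db = [rng.integers(0, 256, 32, dtype=np.uint8) for _ in range(5)]
q = db[2].copy()
q[0] ^= 0b00000001  # flip one bit: the query stays nearest to entry 2
print(match_sequences(q, db))
```

The appeal of binary codes is exactly this comparison step: XOR plus a population count is far cheaper than floating-point distance computations, which is what makes matching long sequences tractable.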

Cited by 86 publications (83 citation statements) | References 26 publications (39 reference statements)
“…Deep features based on convolutional neural networks (CNNs) were adopted to match image sequences [31]. Global features can encode whole-image information without requiring dictionary-based quantization, and have shown promising performance for long-term place recognition [2,26,27,30,33].…”
Section: A. Visual Features for Scene Representation
Confidence: 99%
“…For example, the Chow Liu tree was used by the FAB-MAP SLAM [6,7]. A KD tree implemented with FLANN performs fast nearest-neighbor search in RTAB-MAP [21,22] and other methods [2,21] for efficient image-to-image matching. Very recently, methods based on sparsity-inducing norms were introduced to decide the globally most similar template to the query image [24] (details in Section III-A).…”
Section: B. Image Matching for Place Recognition
Confidence: 99%
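The citation statement above mentions KD-tree nearest-neighbor search (via FLANN) for efficient image-to-image matching. As a small, hedged sketch of that retrieval pattern — FLANN itself is a C++ library, so SciPy's KD-tree stands in here, and the 64-D descriptors are purely synthetic:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical 64-D global image descriptors for 1000 database places
# (stand-ins for whatever real descriptor a place-recognition system uses).
rng = np.random.default_rng(1)
database = rng.standard_normal((1000, 64))
tree = cKDTree(database)  # build the KD tree once, offline

# A query descriptor: a slightly perturbed view of database place 42.
query = database[42] + 0.01 * rng.standard_normal(64)
dist, idx = tree.query(query, k=1)  # fast nearest-neighbor lookup
print(idx)
```

Building the tree once and querying in roughly logarithmic time is what makes this approach attractive over brute-force matching when the map grows large.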