This paper addresses the problem of feature-based robot localization in large-size environments. With recent progress in SLAM techniques, it has become crucial for a robot to estimate its position in real time with respect to a large-size map that is incrementally built by other mapper robots. Self-localization using large-size maps has been studied in the literature, but most existing work assumes that a complete map is given prior to the self-localization task. In this paper, we present a novel scheme for robot localization, as well as a map representation, that works successfully with large-size and incremental maps. This work combines our two previous incremental methods, iLSH and iRANSAC, for appearance-based and position-based localization, respectively.
With the recent success of visual features from deep convolutional neural networks (DCNNs) in visual robot self-localization, it has become important and practical to address more general self-localization scenarios. In this paper, we address the scenario of self-localization from images with small overlap. We explicitly introduce a localization difficulty index as a decreasing function of the view overlap between query and relevant database images, and investigate performance versus difficulty for challenging cross-view self-localization tasks. We then reformulate self-localization as scalable bag-of-visual-features (BoVF) scene retrieval and present an efficient solution called PCA-NBNN, which aims to facilitate fast yet discriminative correspondence between partially overlapping images. The proposed approach adopts recent findings on discriminativity-preserving encoding of DCNN features using principal component analysis (PCA) and on cross-domain scene matching using the naive Bayes nearest neighbor (NBNN) distance metric. We experimentally demonstrate that the proposed PCA-NBNN framework frequently achieves results comparable to previous DCNN features, while the BoVF model is significantly more efficient. We further address an important alternative scenario, self-localization from images with no overlap, and report the results.
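The NBNN matching idea sketched in this abstract can be illustrated with a minimal example: each query descriptor votes with the distance to its nearest neighbor in a candidate image's descriptor set, and the candidate with the smallest accumulated distance wins. The sketch below is a hypothetical illustration with toy 2-D descriptors, not the authors' implementation; in PCA-NBNN the descriptors would be PCA-compressed DCNN features.

```python
import numpy as np

def pca_project(descriptors, mean, components):
    """Project raw descriptors onto precomputed PCA components
    (illustrative stand-in for the paper's PCA encoding step)."""
    return (descriptors - mean) @ components.T

def nbnn_distance(query_descs, db_descs):
    """Image-to-image NBNN distance: for each query descriptor,
    add the squared distance to its nearest database descriptor."""
    total = 0.0
    for q in query_descs:
        d2 = np.sum((db_descs - q) ** 2, axis=1)
        total += d2.min()
    return total

def localize(query_descs, database):
    """Return the database image whose descriptor set minimizes
    the NBNN distance to the query (database: name -> descriptor array)."""
    return min(database, key=lambda name: nbnn_distance(query_descs, database[name]))
```

Because the score accumulates over individual descriptors rather than requiring whole-image similarity, the metric remains informative even when query and database images overlap only partially.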
Retrieving a large collection of environment maps built by mapper robots is a key problem in mobile robot self-localization. In this paper, the map retrieval problem is studied from the novel perspective of the multi-scale bag-of-features (BOF) approach. In general, a multi-scale approach is advantageous in capturing both the global structure and the local details of a given map. BOF map retrieval is advantageous for its compact map representation as well as its efficient retrieval using an inverted file system. The main contribution of this paper is combining the advantages of both approaches. Our approach is based on multi-cue BOF as well as packing BOF, and achieves both efficiency and compactness in the map retrieval system. Experiments evaluate the effectiveness of the presented techniques using a large collection of environment maps.
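The inverted-file retrieval mentioned in the abstract can be sketched in a few lines: each visual word points to the maps that contain it, so a query only touches the postings lists of its own words instead of scanning every map. This is a generic BOF inverted-index sketch under assumed data shapes (integer word IDs per map), not the paper's multi-scale or packed representation.

```python
from collections import Counter, defaultdict

def build_inverted_index(db_bofs):
    """db_bofs: {map_id: iterable of visual-word ids}.
    Returns word id -> set of map ids containing that word."""
    index = defaultdict(set)
    for map_id, words in db_bofs.items():
        for w in set(words):
            index[w].add(map_id)
    return index

def retrieve(query_words, index):
    """Score candidate maps by the number of visual words they
    share with the query; return (map_id, score) pairs, best first."""
    votes = Counter()
    for w in set(query_words):
        for map_id in index.get(w, ()):
            votes[map_id] += 1
    return votes.most_common()
```

Real systems typically weight votes (e.g. by tf-idf) rather than counting shared words uniformly, but the index structure is the same.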