Proceedings of the British Machine Vision Conference 2015
DOI: 10.5244/c.29.70

Robust Direct Visual Localisation using Normalised Information Distance

Abstract: Real-time visual localisation is a key technology enabling mobile location applications [7], virtual and augmented reality [1] and robotics [3]. The recent availability of low-cost GPU hardware and GPGPU programming has enabled a new class of 'direct' visual localisation methods that make use of every pixel from an input image for tracking and matching [6], in contrast to traditional feature-based methods that use only a subset of the input image. The additional information available to direct methods localisin…
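The quantity named in the title, the Normalised Information Distance, can be computed from the joint intensity histogram of two images as NID(A,B) = (H(A,B) - I(A;B)) / H(A,B). The sketch below is a minimal illustration of that formula on greyscale images; the function names, bin count and test images are assumptions for the example, not the authors' implementation.

```python
# Minimal sketch: Normalised Information Distance (NID) between two greyscale
# images, computed from their joint intensity histogram (illustrative only).
import numpy as np

def nid(img_a, img_b, bins=32):
    """NID = (H(A,B) - I(A;B)) / H(A,B); 0 for identical images, ~1 for unrelated ones."""
    # Joint histogram of corresponding pixel intensities
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    p_ab = hist / hist.sum()          # joint distribution P(A,B)
    p_a = p_ab.sum(axis=1)            # marginal P(A)
    p_b = p_ab.sum(axis=0)            # marginal P(B)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    h_ab = entropy(p_ab)
    mi = entropy(p_a) + entropy(p_b) - h_ab   # mutual information I(A;B)
    return (h_ab - mi) / h_ab                 # normalised information distance

# A global gain/bias change leaves the two images highly dependent, so the NID
# stays small, which is why the measure is robust to illumination change.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(120, 160)).astype(np.float64)
brighter = np.clip(0.6 * ref + 40, 0, 255)    # simulated illumination change
print(nid(ref, ref), nid(ref, brighter))
```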

Cited by 29 publications (16 citation statements). References 43 publications (2 reference statements).
“…Consistent 3D models are also used to perform the VBL task. For indoor localization, works from [183,145] use a textured model reconstructed from an RGB-D sensor [183] or one hand-crafted with dedicated software [145]. City-scale models are used by [10,157,146,144,29] to perform outdoor VBL.…”
Section: Geometric Information (citation type: mentioning, confidence: 99%)
“…Another widely used method for combining data of different types consists of projecting one of the engaged data into the representation space of the other. For instance, many methods consider the challenging problem of registering photographs onto 3D models [180,80,7,145,146,144]. Similarity comparison is performed using synthetic images generated from the 3D models [170,117,10,157].…”
Section: Cross-data Localization (citation type: mentioning, confidence: 99%)
“…Since visual and laser information are represented in different modalities, there are generally two ways to align vision-tracked trajectories with the laser map [7]. One way [6]-[8] is to synthesize 2D images from the 3D laser map using the intensity or depth information. The currently observed images are matched with the synthesized ones to calculate the relative transformations.…”
Section: Related Work (citation type: mentioning, confidence: 99%)
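To make the "synthesize 2D images from the 3D laser map" step concrete, the sketch below projects a laser point cloud into a virtual pinhole camera to produce a depth image that could then be matched against the live view. The intrinsics, pose and point cloud are illustrative assumptions, not values from any of the cited systems.

```python
# Minimal sketch: render a synthetic depth image from a 3D point cloud by
# pinhole projection with a z-buffer (illustrative, not a cited implementation).
import numpy as np

def render_depth(points_world, K, T_cam_world, width, height):
    """Project Nx3 world points through pose T_cam_world (4x4) and intrinsics K (3x3)."""
    # Transform points from the world frame into the camera frame
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]       # keep points in front of the camera

    # Pinhole projection to pixel coordinates
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v, z = uv[:, 0].astype(int), uv[:, 1].astype(int), pts_cam[:, 2]

    # Z-buffer: keep the nearest point per pixel
    depth = np.full((height, width), np.inf)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[valid], v[valid], z[valid]):
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi
    return depth

# Hypothetical intrinsics, identity pose and a random point cloud for the demo
K = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1.0]])
T = np.eye(4)
cloud = np.random.default_rng(1).uniform([-5, -2, 2], [5, 2, 20], size=(5000, 3))
depth_image = render_depth(cloud, K, T, 640, 480)
```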
“…Prior works exploiting known map appearance for precise monocular pose estimation [10,17,19,22] employ a textured depth map within an iterative optimization framework to compute the warp that minimizes a photometric cost function between a rendered image and the live image, such as the Normalized Information Distance [19], which is robust to illumination change, or a Sum of Squared Differences cost function with an affine illumination model to handle illumination change [17]. Both algorithms rely on initialization for tracking via a GPS prior or an ORB-based bag-of-words approach, respectively, and on expensive ray-cast dense textured data for refinement.…”
Section: Introduction (citation type: mentioning, confidence: 99%)
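As one concrete example of the photometric costs mentioned above, the sketch below evaluates a Sum of Squared Differences residual between a rendered and a live image after fitting a per-image affine illumination model (gain and bias) in closed form. It is an assumed, simplified formulation for illustration, not the cited authors' implementation; in practice such a cost is re-evaluated for every candidate warp inside the iterative pose optimization.

```python
# Minimal sketch: SSD photometric cost with an affine illumination model
# (gain a, bias b) fitted by least squares before the residual is evaluated.
import numpy as np

def ssd_affine_illumination(rendered, live):
    """Return the SSD cost after compensating gain/bias between rendered and live images."""
    r = rendered.ravel().astype(np.float64)
    l = live.ravel().astype(np.float64)
    # Least-squares fit of live ≈ a * rendered + b (affine illumination model)
    A = np.stack([r, np.ones_like(r)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, l, rcond=None)
    residual = a * r + b - l
    return float(np.sum(residual ** 2)), (a, b)

# Single evaluation on a synthetic pair with a simulated lighting change + noise
rng = np.random.default_rng(2)
rendered = rng.uniform(0, 255, size=(60, 80))
live = 1.3 * rendered - 20 + rng.normal(0, 2, size=rendered.shape)
cost, gain_bias = ssd_affine_illumination(rendered, live)
print(cost, gain_bias)
```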