Proceedings of the British Machine Vision Conference 2012
DOI: 10.5244/c.26.76
Image Retrieval for Image-Based Localization Revisited

Abstract: To reliably determine the camera pose of an image relative to a 3D point cloud of a scene, correspondences between 2D features and 3D points are needed. Recent work has demonstrated that directly matching the features against the points outperforms methods that take an intermediate image retrieval step, in terms of the number of images that can be localized successfully. Yet, direct matching is inherently less scalable than retrieval-based approaches. In this paper, we therefore analyze the algorithmic factors t…

Cited by 283 publications (245 citation statements)
References 40 publications (97 reference statements)
“…For both Dubrovnik and Rome, our top 10 success rate (99.5% on Dubrovnik and 99.7% on Rome) is comparable to the results of [18] (100% / 99.7%), which uses direct 3D matching, requiring much more memory and expensive nearest neighbor computations. Our performance on Aachen dataset (89.16%) also rivals that of [26], where their best result 89.97% is achieved with a relatively expensive method, while we only use the compact set of weights learned from neighborhoods. In all cases, we improve the top k accuracies over BoW retrieval techniques, resulting in a better ranking for the final step of geometric consistency check procedure.…”
Section: Performance Evaluation
confidence: 84%
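The quoted evaluation reports "top k" success rates: a query counts as localized if any of its top-k retrieved database images is a correct match, which is then confirmed by geometric verification. A minimal sketch of that metric, with illustrative names (`ranked_lists`, `relevant`) that are not from any paper's code:

```python
# Hedged sketch: top-k success rate for image retrieval, as used in the
# quoted evaluation. A query "succeeds" at rank k if at least one of its
# k highest-ranked database images is a correct (relevant) match.

def top_k_success_rate(ranked_lists, relevant, k):
    """ranked_lists: per-query lists of DB image ids, best first.
    relevant: per-query sets of DB image ids that are correct matches."""
    hits = sum(
        1 for ranking, good in zip(ranked_lists, relevant)
        if any(img in good for img in ranking[:k])
    )
    return hits / len(ranked_lists)

# Toy example: 3 queries; at k = 2, queries 1 and 3 find a correct match.
rankings = [["a", "b", "c"], ["d", "e", "f"], ["g", "h", "i"]]
relevant = [{"b"}, {"x"}, {"g"}]
print(top_k_success_rate(rankings, relevant, 2))  # 2 of 3 queries hit
```

Raising this rate improves the candidate list handed to the (more expensive) geometric consistency check, which is the effect the quote describes.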
“…We evaluate our algorithm on the Dubrovnik and Rome datasets [17] and the Aachen dataset [26]; these are summarized in Table 1, along with statistics over the neighborhoods we compute. To represent images as BoW histograms, we learn two kinds of visual vocabularies [22]: one vocabulary learned from each dataset itself (a specific vocabulary) and another shared vocabulary learned from ∼20,000 randomly sampled images from an unrelated dataset (a generic vocabulary).…”
Section: Datasets and Preprocessing
confidence: 99%
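The bag-of-words representation mentioned in this quote quantizes an image's local descriptors against a visual vocabulary (cluster centres, typically learned with k-means) and counts how often each visual word occurs. A minimal sketch under assumed shapes (128-D SIFT-like descriptors, a random stand-in vocabulary rather than a learned one):

```python
import numpy as np

# Hedged sketch of a bag-of-words (BoW) image representation: each local
# descriptor is assigned to its nearest visual word, and the image becomes
# an L1-normalized histogram of word counts. The "vocabulary" here is
# random for illustration; in practice it is learned with k-means.

rng = np.random.default_rng(0)
vocabulary = rng.normal(size=(50, 128))    # 50 visual words, 128-D each
descriptors = rng.normal(size=(300, 128))  # one image's local features

def bow_histogram(desc, vocab):
    # squared L2 distance from every descriptor to every visual word
    d2 = ((desc[:, None, :] - vocab[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)  # nearest word per descriptor
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()   # L1-normalize so images are comparable

h = bow_histogram(descriptors, vocabulary)
print(h.shape, round(h.sum(), 6))  # (50,) 1.0
```

The quote's distinction between a "specific" and a "generic" vocabulary only changes where `vocabulary` is learned from; the quantization step itself is the same.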
“…In this section, we evaluate the performance of our algorithm on several datasets, including the Dubrovnik dataset of Li et al [9], the Aachen dataset of Sattler et al [16], and the much larger Landmarks dataset [10]; these three datasets are summarized in Table 1.…”
Section: Methods
confidence: 99%
“…We evaluate three approaches to computing minimal scene descriptions: the K-cover algorithm (KC) [9], our initial point set selection algorithm only (KCD), and our full approach including the probabilistic K-cover algorithm (KCP). All methods output a list of points to keep in the original 3D point cloud database.…”

Dataset statistics (Table 1 of the citing work, displaced into the quote above):

Dataset          # DB Imgs   # 3D Points   # Queries
Dubrovnik [9]        6,044     1,886,884         800
Aachen [16]          4,479     1,980,036         369
Landmarks [10]     205,813    38,190,865      10,000
Section: Methods
confidence: 99%
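The K-cover idea behind these minimal scene descriptions is to keep a subset of 3D points such that every database image still observes at least K of the kept points. A greedy sketch of that selection, with illustrative names and not the cited authors' exact implementation:

```python
# Hedged sketch of greedy K-cover point selection: repeatedly keep the
# 3D point that is visible in the most images whose coverage requirement
# is not yet met, until every image observes at least K kept points (or
# no point can still help). Greedy heuristic, not an optimal cover.

def k_cover(point_visibility, num_images, K):
    """point_visibility: {point_id: set of image ids observing that point}."""
    need = {img: K for img in range(num_images)}  # remaining coverage per image
    remaining = dict(point_visibility)
    kept = []
    while any(n > 0 for n in need.values()) and remaining:
        # pick the point covering the most still-uncovered images
        best = max(remaining, key=lambda p: sum(1 for i in remaining[p] if need[i] > 0))
        gain = sum(1 for i in remaining[best] if need[i] > 0)
        if gain == 0:
            break  # no remaining point helps any uncovered image
        kept.append(best)
        for i in remaining[best]:
            if need[i] > 0:
                need[i] -= 1
        del remaining[best]
    return kept

# Toy scene: 3 images, 4 points; require each image to keep K = 1 point.
vis = {0: {0, 1}, 1: {1, 2}, 2: {2}, 3: {0}}
print(k_cover(vis, 3, 1))  # → [0, 1]
```

The probabilistic variant (KCP) in the quote replaces the deterministic coverage requirement with a probabilistic one, but the greedy keep-or-discard structure is the same.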