2014 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2014.66

Minimal Scene Descriptions from Structure from Motion Models

Abstract: How much data do we need to describe a location? We explore this question in the context of 3D scene reconstructions created by running structure from motion on large Internet photo collections, where reconstructions can contain many millions of 3D points. We consider several methods for computing much more compact representations of such reconstructions for the task of location recognition, with the goal of maintaining good performance with very small models. In particular, we introduce a new method for com…


0
96
0

Year Published

2015
2015
2022
2022

Publication Types

Select...
5
4

Relationship

0
9

Authors

Journals

Cited by 85 publications (97 citation statements)
References 22 publications (41 reference statements)
“…Image-Based Localization. Recent progress in image-based localization has led to methods that are now quite robust to changes in scene appearance and illumination [4,61], scale to large scenes [43,56,58,83], and are suitable for real-time computation and mobile devices [8, 33, 35, 43-45, 57, 76] with compressed map representations [15,21]. Traditional localization methods based on image retrieval [34,66] and based on learning [12,35,80,81] have the advantage of not requiring the explicit storage of 3D maps.…”
Section: Related Work
confidence: 99%
“…A threshold of 224 was experimentally obtained from the model by evaluating corresponding descriptors of 3D points (similar to [5]), such that 95% of all correct matches survive. (D) A variable-radius search, where the search radius is defined as 0.7 times the distance to the nearest neighbor in the query image itself.…”
Section: Experiments and Results
confidence: 99%
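The variable-radius search quoted above can be sketched as follows: each query descriptor gets its own acceptance radius, set to 0.7 times the distance to that descriptor's nearest neighbor within the query image itself, and a candidate model match is kept only if it falls inside that radius. This is an illustrative pure-NumPy brute-force sketch, not the cited paper's implementation; the function and variable names are assumptions.

```python
import numpy as np

def variable_radius_matches(query_desc, model_desc, ratio=0.7):
    """Adaptive-radius descriptor matching (illustrative sketch).

    For each query descriptor, accept its nearest model descriptor only
    if the match distance is below ratio * (distance to the query
    descriptor's own nearest neighbour within the query image).
    Returns a list of (query_index, model_index) pairs.
    """
    # Pairwise L2 distances within the query image, self-distances masked out.
    dq = np.linalg.norm(query_desc[:, None, :] - query_desc[None, :, :], axis=-1)
    np.fill_diagonal(dq, np.inf)
    self_nn = dq.min(axis=1)          # nearest neighbour inside the query image
    radius = ratio * self_nn          # per-descriptor acceptance radius

    # Distances from each query descriptor to every model descriptor.
    dm = np.linalg.norm(query_desc[:, None, :] - model_desc[None, :, :], axis=-1)
    best = dm.argmin(axis=1)
    best_dist = dm[np.arange(len(query_desc)), best]

    # Keep only matches that fall inside the adaptive radius.
    keep = best_dist < radius
    return [(int(i), int(best[i])) for i in np.where(keep)[0]]
```

A brute-force search is quadratic in the number of descriptors; a real system would use a k-d tree or approximate nearest-neighbor index, but the adaptive-radius acceptance test is the same either way.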
“…Impact of using 2D-2D point correspondences. Results for networks trained without the additional dataset from [42], or with the correspondence loss disabled (where the clustering is still done on features from the CMU/RobotCar images), are shown in Table 2 (rows 11-14).…”
Section: Visual Localization
confidence: 99%