2016
DOI: 10.1007/978-3-319-25781-5_8

Worldwide Pose Estimation Using 3D Point Clouds

Abstract: We address the problem of determining where a photo was taken by estimating a full 6-DOF-plus-intrinsics camera pose with respect to a large geo-registered 3D point cloud, bringing together research on image localization, landmark recognition, and 3D pose estimation. Our method scales to datasets with hundreds of thousands of images and tens of millions of 3D points through the use of two new techniques: a co-occurrence prior for RANSAC and bidirectional matching of image features with 3D points. We ev…
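
As a rough illustration of the bidirectional-matching idea mentioned in the abstract, the sketch below performs a mutual nearest-neighbour check between image descriptors and 3D-point descriptors. The array layouts, the ratio-test threshold, and the function name are assumptions made for illustration, not the paper's actual procedure.

```python
import numpy as np

def mutual_nn_matches(img_desc, pt_desc, ratio=0.8):
    """Hypothetical sketch of bidirectional (mutual nearest-neighbour) 2D-3D matching.

    img_desc: (N, D) array of descriptors of 2D image features.
    pt_desc:  (M, D) array of descriptors attached to 3D points.
    Returns (i, j) index pairs that are each other's nearest neighbours
    and pass a Lowe-style ratio test in the image-to-point direction.
    """
    # Brute-force pairwise squared distances; real systems at the scale in the
    # abstract would use approximate nearest-neighbour search instead.
    d2 = ((img_desc[:, None, :] - pt_desc[None, :, :]) ** 2).sum(-1)

    fwd = d2.argmin(axis=1)   # best 3D point for each image feature
    bwd = d2.argmin(axis=0)   # best image feature for each 3D point

    matches = []
    for i, j in enumerate(fwd):
        if bwd[j] != i:                        # keep only mutual matches
            continue
        second = np.partition(d2[i], 1)[1]     # second-best distance for the ratio test
        if d2[i, j] < (ratio ** 2) * second:   # squared distances, so square the ratio
            matches.append((i, j))
    return matches
```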

Cited by 131 publications (274 citation statements)
References 31 publications
“…A building front coordinate system is associated with each facade of the 3D building models.² This coordinate system is defined as shown in Fig. 1b.…”
Section: Notation
Confidence: 99%
“…For example, Q_r represents the … ¹ The DEM used is a geometric model where each road is represented by a plane. ² The 3D building models used are a geometric model where each facade is represented by a plane.…”
Section: Notation
Confidence: 99%
“…Based on the pose of the image, high-level analysis can be applied [5][6][7][8][9][10][11][12][13][14][15][16]. Generally, there are three key steps in a single-image-based localization system [17,18]: (1) 2D feature extraction (e.g., SIFT [19]) from the query image, (2) matching between these 2D features and the pre-built 3D feature point cloud, and (3) camera pose estimation by solving a perspective-n-point (PnP) problem [20][21][22]. The 3D feature point cloud is usually reconstructed offline from many captured images using a conventional 3D reconstruction algorithm [4,23].…”
Section: Introduction
Confidence: 99%
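
Those three steps map onto standard tooling. The following is a minimal sketch using OpenCV, assuming a pre-built point cloud stored as 3D coordinates plus SIFT descriptors and a known intrinsic matrix K; the file names and the ratio-test threshold are placeholders, not taken from the cited papers.

```python
import cv2
import numpy as np

# Assumed pre-built 3D feature point cloud: Mx3 coordinates and MxD SIFT descriptors.
points3d = np.load("cloud_xyz.npy").astype(np.float32)
cloud_desc = np.load("cloud_desc.npy").astype(np.float32)
K = np.load("camera_matrix.npy")              # 3x3 intrinsic matrix of the query camera

# (1) 2D feature extraction from the query image.
img = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, query_desc = sift.detectAndCompute(img, None)

# (2) 2D-3D matching against the cloud descriptors with a ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
pairs = matcher.knnMatch(query_desc, cloud_desc, k=2)
good = []
for pair in pairs:
    if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
        good.append(pair[0])

pts2d = np.float32([keypoints[m.queryIdx].pt for m in good])
pts3d = np.float32([points3d[m.trainIdx] for m in good])

# (3) Camera pose from the 2D-3D correspondences via RANSAC + PnP.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)                # world-to-camera rotation
    camera_center = -R.T @ tvec               # camera position in world coordinates
    print("Estimated camera center:", camera_center.ravel())
```

At the scale described in the abstract (tens of millions of 3D points), step (2) dominates the cost and produces many wrong correspondences, which is where techniques such as the paper's co-occurrence prior for RANSAC and bidirectional matching are aimed.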