2018
DOI: 10.1016/j.patcog.2017.11.003
Accurate and efficient ground-to-aerial model alignment

Cited by 20 publications (13 citation statements)
References 19 publications
“…• Multi-source 3D data fusion. Few attempts have been made to fuse aerial and ground-based 3D point clouds or models [124]. The large differences in camera viewpoint and scale make the alignment of aerial and ground 3D data a tricky problem.…”
Section: Dense Reconstruction (mentioning)
confidence: 99%
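To make the alignment problem concrete, the sketch below registers a ground-based point cloud to an aerial one with a generic coarse-to-fine pipeline (FPFH features plus RANSAC, refined by point-to-plane ICP) using Open3D. The file names, voxel size, and parameters are illustrative assumptions; this is not the cited paper's method, which specifically targets the viewpoint and scale gap.

```python
# Minimal sketch (not the paper's method): coarse-to-fine registration of a
# ground point cloud to an aerial one with Open3D. File names and voxel size
# are placeholder assumptions.
import open3d as o3d
import numpy as np

ground = o3d.io.read_point_cloud("ground.ply")   # hypothetical inputs
aerial = o3d.io.read_point_cloud("aerial.ply")

voxel = 0.5  # assumed metric resolution; tune to the data
ground_ds = ground.voxel_down_sample(voxel)
aerial_ds = aerial.voxel_down_sample(voxel)
for pc in (ground_ds, aerial_ds):
    pc.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))

# Coarse alignment: FPFH descriptors + RANSAC over feature correspondences.
def fpfh(pc):
    return o3d.pipelines.registration.compute_fpfh_feature(
        pc, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))

coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    ground_ds, aerial_ds, fpfh(ground_ds), fpfh(aerial_ds),
    mutual_filter=True, max_correspondence_distance=1.5 * voxel,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    ransac_n=3,
    checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
    criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine alignment: point-to-plane ICP starting from the coarse transform.
fine = o3d.pipelines.registration.registration_icp(
    ground_ds, aerial_ds, voxel, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print("estimated ground-to-aerial transform:\n", np.asarray(fine.transformation))
```

A pipeline like this typically fails when the scale difference between the two sources is large, which is exactly why dedicated ground-to-aerial alignment methods are needed.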
“…To further evaluate the generalization ability of the proposed LMPG in different scenes, we also included these public datasets in our experiments. As mentioned in [10], few large-scale publicly available datasets have ground and aerial images of the same architectural scene. Many released datasets are meant specifically for urban 3D reconstruction, and each contains complete ground and aerial image pairs for only one building.…”
Section: A. Datasets (mentioning)
confidence: 99%
“…Image matching aims to find correspondences between two or more overlapping images, a problem that has been extensively studied under different baselines and scenarios [9], [10], [11], [12], [13], [14]. Image matching is typically solved in three steps: local feature extraction and description, putative match construction, and mismatch removal.…”
Section: Introduction (mentioning)
confidence: 99%
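A minimal sketch of that three-step pipeline, using OpenCV's SIFT for extraction and description, Lowe's ratio test for putative matches, and RANSAC on the fundamental matrix for mismatch removal. The image file names are placeholders, and the specific thresholds are illustrative rather than those of the cited works.

```python
# Three-step matching pipeline: extraction/description, putative matching,
# mismatch removal. Illustrative only; file names are hypothetical.
import cv2
import numpy as np

img1 = cv2.imread("ground_view.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("aerial_view.jpg", cv2.IMREAD_GRAYSCALE)

# 1) Local feature extraction and description.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2) Putative match construction (nearest neighbours + Lowe's ratio test).
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
putative = [m for m, n in knn if m.distance < 0.75 * n.distance]

# 3) Mismatch removal (RANSAC on the fundamental matrix).
pts1 = np.float32([kp1[m.queryIdx].pt for m in putative])
pts2 = np.float32([kp2[m.trainIdx].pt for m in putative])
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
inliers = [m for m, keep in zip(putative, inlier_mask.ravel()) if keep]
print(f"{len(putative)} putative matches, {len(inliers)} after mismatch removal")
```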
“…Some researchers have tried to correct the differences in viewing angle so that classical local features can still be matched. One category of methods uses camera poses and 3D geometric information (e.g., stereo models [21], depth maps [22], and 3D sparse meshes [23]) to generate synthetic images approximately aligned to the viewpoint of the target image, thereby enabling view-dependent SIFT matching. However, these methods require reconstructing dense 3D geometry for the images, and they are strongly limited by the quality of the reconstructed 3D information.…”
Section: Introduction (mentioning)
confidence: 99%
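The sketch below illustrates the view-synthesis idea in its simplest form: back-project one view's pixels with a depth map, reproject them into the other camera, warp that image into an approximately aligned synthetic view, and run standard SIFT matching against it. The intrinsics, relative pose, depth map, and file names are placeholder assumptions; as the quoted passage notes, real systems obtain this geometry from SfM/MVS, and the result degrades with its quality.

```python
# Rough sketch of view-dependent matching via a synthesized view.
# Camera parameters, the depth map, and file names are placeholder assumptions.
import cv2
import numpy as np

source = cv2.imread("source_view.jpg", cv2.IMREAD_GRAYSCALE)
target = cv2.imread("target_view.jpg", cv2.IMREAD_GRAYSCALE)
depth = np.load("source_depth.npy")          # per-pixel depth of the source view
K = np.array([[1000.0, 0, source.shape[1] / 2],
              [0, 1000.0, source.shape[0] / 2],
              [0, 0, 1.0]])                  # assumed shared intrinsics
R = np.eye(3)                                # assumed source-to-target rotation
t = np.array([0.5, 0.0, 0.0])                # assumed source-to-target translation

# Back-project source pixels to 3D, transform into the target camera, and
# reproject, giving a per-pixel warp field from source to target coordinates.
h, w = source.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))
rays = np.linalg.inv(K) @ np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
pts_src = rays * depth.ravel()
pts_tgt = R @ pts_src + t[:, None]
proj = K @ pts_tgt
map_x = (proj[0] / proj[2]).reshape(h, w).astype(np.float32)
map_y = (proj[1] / proj[2]).reshape(h, w).astype(np.float32)

# Inverse-warp the target into the source viewpoint to obtain an approximately
# aligned synthetic image, then run ordinary SIFT matching against it.
synthetic = cv2.remap(target, map_x, map_y, cv2.INTER_LINEAR)
sift = cv2.SIFT_create()
kp_s, des_s = sift.detectAndCompute(source, None)
kp_t, des_t = sift.detectAndCompute(synthetic, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_s, des_t)
print(f"{len(matches)} matches against the synthesized view")
```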