2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00642
Coming Down to Earth: Satellite-to-Street View Synthesis for Geo-Localization

Cited by 93 publications (69 citation statements). References 29 publications.
“…2D ground-to-satellite image-based localization. Many recent works [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [38], [39], [40], [41], [42], [43], [44], [45] resort to satellite images as a reference set for image-based camera localization, due to the widespread coverage and easy accessibility of satellite imagery. Challenges of ground-to-satellite image matching include the significant visual appearance differences, geometric projection differences, and the unknown relative orientation between the two views, as well as the limited FoV of query ground images.…”
Section: Related Work
confidence: 99%
“…Challenges of ground-to-satellite image matching include the significant visual appearance differences, geometric projection differences, and the unknown relative orientation between the two view images, as well as the limited FoV of query ground images. Existing works have focused on designing powerful network architectures [1], [3], [4], [7], [41], [43], bridging the cross-view domain gaps [6], [8], [9], [10], [45], and learning orientation invariant or equivariant features [4], [5], [10], [43], [44].…”
Section: Related Work
confidence: 99%
“…[Shi et al., 2020a] addressed the cross-view domain gap by applying a polar transform to the aerial images to approximately align them with ground views up to an unknown azimuth angle. [Toker et al., 2021] proposed a GAN structure that creates realistic street views from satellite images and localizes the corresponding query street view simultaneously in an end-to-end manner. [Wang et al., 2021] proposed a square chunking strategy to fully extract useful information from the image edges; this chunking scheme yields a significant improvement in localization performance.…”
Section: Deeply-Learned Features for Geo-Localization
confidence: 99%
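The polar transform mentioned above can be illustrated with a short sketch. This is not the authors' implementation, only a minimal NumPy version of the general idea: rows of the output map to radial distance from the aerial image center and columns map to azimuth, producing a panorama-like strip that is aligned with a ground view up to an unknown rotation offset. The function name and output resolution are illustrative choices, not taken from any cited paper.

```python
import numpy as np

def polar_transform(aerial, out_h=128, out_w=512):
    """Resample a square aerial image (H x W x C, H == W) into a
    panorama-like strip via polar coordinates, using nearest-neighbor
    sampling. The top output row corresponds to the image border
    (far scene content), the bottom row to the image center."""
    s = aerial.shape[0]
    ys, xs = np.meshgrid(np.arange(out_h), np.arange(out_w), indexing="ij")
    radius = (s / 2.0) * (out_h - ys) / out_h   # radial distance from center
    theta = 2.0 * np.pi * xs / out_w            # azimuth angle per column
    src_x = s / 2.0 + radius * np.sin(theta)
    src_y = s / 2.0 - radius * np.cos(theta)
    src_x = np.clip(np.round(src_x).astype(int), 0, s - 1)
    src_y = np.clip(np.round(src_y).astype(int), 0, s - 1)
    return aerial[src_y, src_x]
```

Because the azimuth axis wraps around, a rotation of the camera in the aerial image becomes a horizontal circular shift of the transformed strip, which is what makes the alignment "up to an unknown azimuth angle".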
“…Meanwhile, GANs provide a feasible way to augment data in few-shot learning tasks [24]. As for autonomous driving, most methods focus on street view synthesis [25], [26], traffic sign generation [27], [28] and style transfer [29], [30]. Few works [31], [32] are designed for traffic lights.…”
Section: A. Image and Video Generation
confidence: 99%