2018 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv.2018.00059
Generative Adversarial Frontal View to Bird View Synthesis

Abstract: Environment perception is an important task with great practical value, and the bird view is an essential part of creating panoramas of the surrounding environment. Due to the large gap and severe deformation between the frontal view and the bird view, generating a bird view image from a single frontal view is challenging. To tackle this problem, we propose BridgeGAN, a novel generative model for bird view synthesis. First, an intermediate view, i.e., the homography view, is introduced to bridge the large gap. Next, …
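The "homography view" the abstract refers to is obtained by a planar projective mapping between the frontal image and an overhead layout. As a minimal sketch of that idea, the snippet below applies a 3×3 homography to a single pixel coordinate; the matrix values here are hypothetical and purely illustrative, not the paper's learned or calibrated transform.

```python
import numpy as np

# Illustrative 3x3 homography (hypothetical values, not from the paper):
# maps a frontal-view ground-plane pixel toward a top-down layout.
H = np.array([[1.0, 0.5, -100.0],
              [0.0, 2.0, -200.0],
              [0.0, 0.01, 1.0]])

def warp_point(H, x, y):
    """Apply a projective transform to one pixel in homogeneous coordinates."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]  # divide out the projective scale

u, v = warp_point(H, 320.0, 400.0)
```

In practice the same matrix is applied densely to every pixel (e.g. a perspective warp of the whole image) to produce the intermediate view that bridges the frontal and bird-view domains.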

Cited by 51 publications (28 citation statements)
References 47 publications
“…We demonstrate in this paper that we are able to generate reliable, improved IPM for larger scenes than in [16], which are therefore able to directly aid scene understanding tasks. We achieve this in real time using real-world data collected under different conditions with a single front-facing camera.…”
Section: arXiv:1812.00913v2 [cs.CV] 2 May 2019
Confidence: 91%
“…State-of-the-art approaches for cross-domain image translation tasks train (conditional) Generative Adversarial Networks (GANs) to transform images to a new domain [14], [15]. However, these methods are designed to perform aligned appearance transformations and struggle when views change drastically [16]. The latter work, in which a synthetic dataset with perfect ground-truth labels is used to learn IPM, is closest to ours.…”
Section: arXiv:1812.00913v2 [cs.CV] 2 May 2019
Confidence: 99%
“…Cross-view relations have been explored in [46,32,11] with more challenging settings of aerial and ground views, where there is minimal semantic and viewpoint overlap between the objects in the images. Cross-view image synthesis between these contrasting domains has attracted wide interest lately [32,33,9,49] with the popularity of GANs; these works have been successful in image translation between aerial and ground-level cropped (single camera) images. Zhai et al [46] explored the possibility of synthesizing a ground-level panorama from ground semantic layouts wherein the layouts were predicted from the semantic maps of the aerial images.…”
Section: Domain Transfer and GANs
Confidence: 99%