2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00252
Multi-Channel Attention Selection GAN With Cascaded Semantic Guidance for Cross-View Image Translation

Abstract: Cross-view image translation is challenging because it involves images with drastically different views and severe deformation. In this paper, we propose a novel approach named Multi-Channel Attention SelectionGAN (SelectionGAN) that makes it possible to generate images of natural scenes in arbitrary viewpoints, based on an image of the scene and a novel semantic map. The proposed SelectionGAN explicitly utilizes the semantic information and consists of two stages. In the first stage, the condition image and …

Cited by 187 publications (215 citation statements)
References 36 publications
“…We compare our P-GAN with both single-view image generation methods [4,5] and cross-view image generation methods [6,7]. We adopt the same experimental setup as in [4,6,7]. All images are scaled to 256×256.…”
Section: Experimental Results: Datasets (mentioning)
confidence: 99%
“…In our experiments, we set λ1 = 10, λ2 = 10, λ3 = 100, λ4 = 10, λ5 = 1, λ6 = 1 in Eqs. (4), (7), (8) and (9), respectively. The state-of-the-art cross-view generation methods, i.e., X-Fork [6], X-Seq [6] and SelectionGAN [7], utilize segmentation maps to facilitate target-view image generation.…”
Section: Experimental Results: Datasets (mentioning)
confidence: 99%