2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.01066
Very Long Natural Scenery Image Prediction by Outpainting

Abstract: Compared with image inpainting, image outpainting receives less attention due to two challenges. The first is how to keep spatial and content consistency between the generated images and the original input. The second is how to maintain high quality in the generated results, especially for multi-step generation, where generated regions are spatially far away from the initial input. To solve these two problems, we devise innovative modules named Skip Horizontal Connection and Recurrent C…

Cited by 97 publications (107 citation statements). References 28 publications.
“…Our contributions are summarized as follows: 1) We propose a mirrored input, which replaces generative image extension with an image inpainting problem and thus helps to achieve higher quality pixel gener- [1], [2] predict semantic content in the surroundings of the target image. Some methods [9], [10] perform extrapolation on texture patterns.…”
Section: Generative Image Extension (Examples From Our Results). Citation type: mentioning.
confidence: 99%
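The mirrored-input idea is only summarized in the snippet above. As an illustrative sketch under assumed details (not the citing authors' actual pipeline), the hypothetical helper below reflect-pads an image on the side to be extended and returns a mask, so that a standard inpainting model can rewrite the new strip instead of extrapolating from nothing.

import numpy as np

def mirrored_extension_input(image: np.ndarray, extend_px: int):
    """Hypothetical helper: reflect-pad an H x W x C image on the right so the
    strip to be generated starts from mirrored content, recasting image
    extension as inpainting. extend_px must be at most W - 1 for reflect mode."""
    # Fill the unknown strip with a mirror of the rightmost columns.
    mirrored = np.pad(image, ((0, 0), (0, extend_px), (0, 0)), mode="reflect")
    # Boolean mask marking the strip an inpainting model should rewrite.
    mask = np.zeros(mirrored.shape[:2], dtype=bool)
    mask[:, image.shape[1]:] = True
    return mirrored, mask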
“…Dataset. We use the Places subset [29] and Scenery dataset proposed in VLNS [2]. We construct the Places subset from the Places365-Challenge [29] dataset's trainset.…”
Section: Methods. Citation type: mentioning.
confidence: 99%