2016
DOI: 10.48550/arxiv.1607.07539
Preprint

Semantic Image Inpainting with Deep Generative Models

Cited by 105 publications (194 citation statements)
References 0 publications
“…However, this is already feasible for many types of distributions. For example, image inpainting is a subfield of computer vision that has a long history [13] and much recent work (e.g., [32,31,30,28,23,33]), with plausible in-fill models available for many domains. As these generative models improve, so will the framework we proposed.…”
Section: Discussion
mentioning, confidence: 99%
“…The PICS method was implemented using the BART Toolbox [32] with wavelets as the sparse transform. In order to further demonstrate the performance of our UFLoss, MoDL with ℓ2 + perceptual VGG loss [37] was also included in our comparisons.…”
Section: Unrolled Reconstructions with UFLoss
mentioning, confidence: 99%
“…Since their introduction, GANs have enjoyed great empirical success, with a wide range of applications, especially in image generation and natural language processing, including high resolution image generation [Denton et al., 2015, Radford et al., 2015], image inpainting [Yeh et al., 2016], image super-resolution [Ledig et al., 2017], visual manipulation [Zhu et al., 2016], text-to-image synthesis [Reed et al., 2016], video generation [Vondrick et al., 2016], semantic segmentation [Luc et al., 2016], and abstract reasoning diagram generation [Kulharia et al., 2017].…”
Section: Introduction
mentioning, confidence: 99%