2017 IEEE International Conference on Computer Vision (ICCV) 2017
DOI: 10.1109/iccv.2017.481
EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis

Abstract: Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR), which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding …
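The pixel-wise measure named in the abstract can be stated concretely. Below is a minimal sketch (hypothetical NumPy code, not from the paper) that computes PSNR from the mean squared error between a ground-truth image and a reconstruction; because the score depends only on per-pixel differences, a blurry average of many plausible textures can score higher than a sharp but slightly misaligned one.

    import numpy as np

    def psnr(reference, estimate, peak=255.0):
        # Peak signal-to-noise ratio between a ground-truth image and a reconstruction.
        mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10(peak ** 2 / mse)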

Cited by 918 publications (730 citation statements). References 46 publications.
“…Since the choice of loss function guides the learning process, defining an appropriate loss function is important when constructing a CNN framework. Pixel-wise loss functions such as the L1 loss 14,36,39 or the L2 loss 16,19,20 have been widely used for image super-resolution and denoising, but blurred image quality has recently been identified as a major issue in image generation with CNNs. 15,17,19 As one potential way to overcome this problem, an adversarial loss has been proposed.…”
Section: Discussion
Confidence: 99%
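As a concrete reference for the pixel-wise losses mentioned in the statement above, here is a minimal sketch (assuming PyTorch; the tensors are dummy placeholders, not data from the cited works):

    import torch
    import torch.nn.functional as F

    # sr: super-resolved output of a CNN, hr: ground-truth high-resolution image (dummy data)
    sr = torch.rand(1, 3, 128, 128)
    hr = torch.rand(1, 3, 128, 128)

    l1_loss = F.l1_loss(sr, hr)   # mean absolute error over pixels
    l2_loss = F.mse_loss(sr, hr)  # mean squared error over pixels

    # Minimizing either loss averages over all plausible high-frequency textures,
    # which is one source of the blurred results the citing authors point out.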
“…Another recent technique for better perceptual image quality is inspired by generative adversarial networks (GANs), which are composed of a generator CNN and a discriminator CNN. 16,19,24,25 While the generator was trained to generate HR images close to the ground truths, the discriminator was simultaneously trained to distinguish the generated HR images from the ground truths. Since the discriminator's error is back-propagated to the generator, the errors of the discriminator and the generator are adversarial, yielding an adversarial loss.…”
Section: Related Work
Confidence: 99%
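The adversarial setup described in this statement can be sketched as follows (a simplified, hypothetical PyTorch example; the networks, optimizers, and data are placeholders and not the architecture of the cited paper):

    import torch
    import torch.nn as nn

    # Placeholder generator (LR -> HR) and discriminator (HR image -> real/fake logit).
    generator = nn.Sequential(nn.Upsample(scale_factor=4), nn.Conv2d(3, 3, 3, padding=1))
    discriminator = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())
    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

    lr_batch = torch.rand(4, 3, 32, 32)     # low-resolution inputs (dummy data)
    hr_batch = torch.rand(4, 3, 128, 128)   # ground-truth high-resolution images (dummy data)

    # 1) Train the discriminator to separate ground truths from generated HR images.
    fake = generator(lr_batch).detach()
    d_loss = bce(discriminator(hr_batch), torch.ones(4, 1)) + \
             bce(discriminator(fake), torch.zeros(4, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator with the adversarial loss: the discriminator's error
    #    is back-propagated through the discriminator into the generator.
    g_loss = bce(discriminator(generator(lr_batch)), torch.ones(4, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()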