2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2018.00132

New Techniques for Preserving Global Structure and Denoising with Low Information Loss in Single-Image Super-Resolution

Abstract: This work identifies and addresses two important technical challenges in single-image super-resolution: (1) how to upsample an image without magnifying noise and (2) how to preserve large scale structure when upsampling. We summarize the techniques we developed for our second place entry in Track 1 (Bicubic Downsampling), seventh place entry in Track 2 (Realistic Adverse Conditions), and seventh place entry in Track 3 (Realistic difficult) in the 2018 NTIRE Super-Resolution Challenge. Furthermore, we present n…

Cited by 25 publications (23 citation statements) | References 15 publications (18 reference statements)
“…To reduce the difficulty of training the network, SR at a small scaling factor is performed first; in curriculum-learning-based SR, training starts with 2× upsampling, and the subsequent scaling factors (4×, 8×, and so on) are generated gradually using the output of previously trained networks. ProSR (Wang et al., 2018a) uses the upsampled output of the previous level and trains the next level linearly on top of the previous one, while ADRSR (Bei et al., 2018) concatenates the HR outputs of the previous levels and adds a further convolution layer. In CARN (Ahn, Kang & Sohn, 2018b), the previously generated image is entirely replaced by the image generated at the next level, updating the HR image in sequential order.…”
Section: Supervised Super-resolution | Citation type: mentioning
confidence: 99%
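As a minimal sketch of the curriculum step described in the statement above, the snippet below trains a 4× stage on top of a frozen 2× stage. The tiny pixel-shuffle sub-network, the L1 loss, and the one-step training loop are illustrative assumptions, not the actual ProSR, ADRSR, or CARN designs:

```python
# Illustrative curriculum-learning SR: a 2x stage is trained first, and each
# later stage consumes the previous stage's output (placeholder architecture).
import torch
import torch.nn as nn

class Upx2(nn.Module):
    """One 2x upsampling stage (toy pixel-shuffle sub-network)."""
    def __init__(self, ch=3, feat=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, ch * 4, 3, padding=1),
            nn.PixelShuffle(2),  # ch*4 feature maps -> ch maps at 2x resolution
        )

    def forward(self, x):
        return self.body(x)

stage2x, stage4x = Upx2(), Upx2()
opt = torch.optim.Adam(stage4x.parameters(), lr=1e-4)
lr_img  = torch.rand(1, 3, 32, 32)    # low-resolution input
hr4_img = torch.rand(1, 3, 128, 128)  # 4x ground truth

# Curriculum step: stage2x is assumed already trained, so it is frozen and
# only the new 4x stage learns, starting from the 2x stage's output.
with torch.no_grad():
    sr2 = stage2x(lr_img)
loss = nn.functional.l1_loss(stage4x(sr2), hr4_img)
opt.zero_grad(); loss.backward(); opt.step()
```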
“…In the case of image super-resolution, common augmentation techniques include flipping, cropping, angular rotation, skew, and color degradation (Timofte, Rothe & Van Gool, 2016; Lai et al., 2017; Lim et al., 2017; Tai, Yang & Liu, 2017; Han et al., 2018). Recoloring the image by shuffling the channels of the LR-HR image pair is also used as data augmentation in image SR (Bei et al., 2018).…”
Section: Supervised Super-resolution | Citation type: mentioning
confidence: 99%
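A hedged sketch of the channel-shuffle recoloring augmentation attributed to Bei et al. (2018) above: the same random RGB permutation is applied to both images of an LR-HR pair so the pair stays color-consistent. The (H, W, 3) array layout and the function name are assumptions for illustration:

```python
import numpy as np

def channel_shuffle_pair(lr, hr, rng=np.random.default_rng()):
    """Recolor an LR-HR pair with one shared random channel permutation."""
    perm = rng.permutation(3)            # random ordering of the R, G, B channels
    return lr[..., perm], hr[..., perm]  # same permutation keeps the pair aligned

lr = np.zeros((32, 32, 3))
hr = np.zeros((128, 128, 3))
lr_aug, hr_aug = channel_shuffle_pair(lr, hr)
```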
“…It first passes the image through the denoiser before branching into the main path and the skip connection. This is conceptually similar to the denoiser and SR concatenation by Bei et al. [20]. One potential limitation of this approach is error propagation: if the denoiser removes information that is relevant to super-resolution, it cannot be recovered afterwards.…”
Section: Methods | Citation type: mentioning
confidence: 97%
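A hedged sketch of the design this statement describes, assuming the denoiser runs first and its output then branches into a learned main path and an identity skip connection that are summed; all modules are illustrative placeholders:

```python
import torch
import torch.nn as nn

class DenoiseThenBranch(nn.Module):
    """Placeholder: denoise first, then main path plus skip connection."""
    def __init__(self, ch=3, feat=32):
        super().__init__()
        self.denoiser = nn.Sequential(
            nn.Conv2d(ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, ch, 3, padding=1),
        )
        self.main = nn.Sequential(
            nn.Conv2d(ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, ch, 3, padding=1),
        )

    def forward(self, x):
        d = self.denoiser(x)     # denoise before any branching
        return self.main(d) + d  # main path summed with the skip connection

out = DenoiseThenBranch()(torch.rand(1, 3, 32, 32))
```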
“…Furthermore, deep learning approaches have also been considered that combine denoising with an SR model [19]. The authors in [20] propose cascading a denoiser with an SR model so that the output of the denoiser is fed to the SR network. The pre-network architectural design in our experiments is similar in spirit, but instead of using a fixed convolutional neural network (CNN) denoiser, our design allows further flexibility in the choice of the integrated denoiser.…”
Section: Introduction | Citation type: mentioning
confidence: 99%
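A minimal sketch of the cascade summarized above, with placeholder CNNs standing in for the denoiser and SR networks of [20]; once the denoiser drops detail, the SR network never sees it, which is the error-propagation concern raised in the previous statement:

```python
import torch
import torch.nn as nn

# Placeholder denoiser: maps a noisy image to a cleaned image at the same size.
denoiser = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 3, 3, padding=1),
)

# Placeholder SR network: 2x upsampling via pixel shuffle.
sr_net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 3 * 4, 3, padding=1),
    nn.PixelShuffle(2),
)

noisy_lr = torch.rand(1, 3, 32, 32)
sr = sr_net(denoiser(noisy_lr))  # denoiser output is fed directly to the SR network
```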
“…The network employs motion-compensated frames as input and single-image pre-training. In addition, some of the recent challenges on example-based single-image SR [14,15,16], through benchmarking and the introduction of SR-specific datasets, promoted several methods for super-resolving images [17,18,19,20,21].…”
Section: Introduction | Citation type: mentioning
confidence: 99%