2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00262
Residual Dense Network for Image Super-Resolution

Abstract: A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers…

Cited by 3,050 publications (2,353 citation statements)
References 39 publications
“…In order to verify the validity of the model, we compare the performance on five standard benchmark datasets: Set5 [1], Set14 [27], B100 [17], Urban100 [8], and Manga109 [18]. In terms of PSNR, SSIM and visual effects, we compare our models with the state-of-the-art methods including Bicubic, SRCNN [5], VDSR [10], LapSRN [12], MemNet [22], EDSR [16], RDN [32], RCAN [31], and SAN [4]. We also adopt the self-ensemble strategy [16] to further improve our ADCSR and denote the self-ensembled ADCSR as ADCSR+.…”
Section: Results With Bicubic Degradation (mentioning)
confidence: 99%
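The self-ensemble strategy [16] mentioned above is the geometric "x8" trick from EDSR: run the model on the eight flip/rotation variants of the LR input, undo each transform on the output, and average. A minimal sketch, assuming `model` is any callable mapping an HxWxC array to an SR array of matching geometry (the name is illustrative, not the authors' API):

```python
import numpy as np

def self_ensemble(model, lr):
    """Geometric self-ensemble (x8): average the model's predictions
    over the 4 rotations x 2 horizontal flips of the input, inverting
    each geometric transform on the corresponding output."""
    outputs = []
    for rot in range(4):                 # 0, 90, 180, 270 degrees
        for flip in (False, True):
            x = np.rot90(lr, rot)
            if flip:
                x = np.fliplr(x)
            y = model(x)
            if flip:                     # undo the transforms, flip first
                y = np.fliplr(y)
            y = np.rot90(y, -rot)        # then rotate back
            outputs.append(y)
    return np.mean(outputs, axis=0)
```

Because averaging happens in image space, this costs eight forward passes at test time but typically buys a small, consistent PSNR gain with no retraining.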
“…Lim et al. proposed an enhanced deep residual network (EDSR) [16], which achieved significant performance through a deeper network. Other deep networks, like RDN [32] and MemNet [22], are based on dense blocks. Some networks focus on feature correlations in the channel dimension, such as RCAN [31] and SAN [4].…”
Section: Related Work (mentioning)
confidence: 99%
“…Thus, taking full advantage of multi-scale features is more important than learning more redundant features. In our paper, we propose a parallel structure based on the RDB [ZTK*18] to extract low- and high-level information effectively in a wider network. The residual and dense structure allows the network to learn low-level information and extract richer features than a normal convolution.…”
Section: Related Work (mentioning)
confidence: 99%
“…The first block is the Feature Extraction block (FEB), followed by the Feedback block (FBB) and an HDR reconstruction block (HRB). Inspired by [10], we use a global residual skip connection for bypassing low-level LDR features at every iteration to guide the HDR reconstruction block in the final layers. For every training example, the network runs for n iterations.…”
Section: Fig. 1: FHDR Architecture (mentioning)
confidence: 99%
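The iterative scheme this last statement describes can be sketched abstractly: low-level LDR features are extracted once, then re-injected through a global residual skip before reconstruction at every feedback iteration. A hypothetical sketch, assuming `feedback_step` and `reconstruct` stand in for the FBB and HRB (the names and shapes are illustrative, not the authors' implementation):

```python
import numpy as np

def iterative_reconstruction(ldr, feedback_step, reconstruct, n_iters=4):
    """Run n_iters feedback steps; the low-level features (here the
    input itself, standing in for the FEB output) bypass the loop via
    a global residual skip and guide every reconstruction."""
    low = np.asarray(ldr, dtype=float)   # stand-in for FEB features
    state = low
    outputs = []
    for _ in range(n_iters):
        state = feedback_step(state)
        # global residual skip: low-level features added each iteration
        outputs.append(reconstruct(state + low))
    return outputs                       # one estimate per iteration
```

Returning one estimate per iteration matches the described training setup, where the network runs for n iterations on every example and each output can be supervised.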