2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2016.181
Deeply-Recursive Convolutional Network for Image Super-Resolution

Abstract: We propose an image super-resolution (SR) method using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Despite these advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding/vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection.
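As a rough illustration of the three ideas named in the abstract (a shared convolution reused across recursions, recursive supervision, and a skip connection from the input), here is a minimal PyTorch sketch. It is not the authors' code; the class name, channel count, and the plain averaging of the per-recursion predictions are illustrative simplifications (DRCN learns the ensemble weights and uses separate embedding, inference, and reconstruction sub-networks).

```python
import torch
import torch.nn as nn

class TinyDRCN(nn.Module):
    """Illustrative sketch of a deeply-recursive SR network (hypothetical, simplified)."""
    def __init__(self, channels=64, recursions=16):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        # One convolution whose weights are reused at every recursion:
        # depth grows, but the parameter count does not.
        self.shared = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.reconstruct = nn.Conv2d(channels, 1, 3, padding=1)
        self.recursions = recursions

    def forward(self, x):                      # x: interpolated low-resolution input
        h = self.embed(x)
        predictions = []
        for _ in range(self.recursions):
            h = self.shared(h)                 # same layer applied recursively
            predictions.append(self.reconstruct(h) + x)  # skip connection to the input
        # Recursive supervision: a loss can be attached to every element of
        # `predictions`; the final output here is simply their average.
        return torch.stack(predictions).mean(dim=0), predictions
```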

Cited by 2,327 publications (1,507 citation statements). References 22 publications (45 reference statements).
“…The second limitation of the earlier studies is that the network depth was shallower (3 to 5 layers) than that of most networks used in recent image-restoration studies. In some studies, deep CNNs with layer depths greater than 20 afford much more promising results than shallower networks because of their larger receptive fields. In the present study, we exploited CNNs with layer depths greater than or equal to 20 and compared their results with those of shallower networks (e.g., 3 layers).…”
Section: Introduction (mentioning)
confidence: 99%
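A quick back-of-the-envelope check of the receptive-field point in that excerpt: with stride-1 convolutions and no pooling, each layer widens the receptive field by (kernel − 1) pixels, so depth directly buys spatial context. The kernel choices below are illustrative, not taken from any particular cited paper.

```python
def receptive_field(kernel_sizes, strides=None):
    """Receptive field of stacked 2-D convolutions (no pooling)."""
    strides = strides or [1] * len(kernel_sizes)
    rf, jump = 1, 1
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump   # each layer adds (k - 1) * current jump
        jump *= s              # stride-1 layers keep the jump at 1
    return rf

print(receptive_field([9, 5, 5]))   # 17 -- a shallow 3-layer stack with large kernels
print(receptive_field([3] * 20))    # 41 -- twenty 3x3 layers see a far larger neighborhood
```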
“…In the training process, the images in the dataset were cropped to patches of size (33, 33). The input image size of the CNN model was (33, 33), and the size of each convolution kernel was (9, 9), so the output feature size of the first layer was (33−9+1, 33−9+1, 20), which is (25, 25, 20). The second layer had 10 kernels of size (5, 5), so the output feature size was (25−5+1, 25−5+1, 10), which is (21, 21, 10).…”
Section: Methods (mentioning)
confidence: 99%
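The feature-map sizes quoted in that excerpt follow from the standard valid-convolution formula, output = (input − kernel)/stride + 1. A small check (the function name is ours, not from the cited work):

```python
def conv_output_shape(h, w, kernel, n_filters, stride=1, padding=0):
    """Spatial shape after a 2-D convolution: floor((dim - kernel + 2*pad) / stride) + 1."""
    out_h = (h - kernel + 2 * padding) // stride + 1
    out_w = (w - kernel + 2 * padding) // stride + 1
    return out_h, out_w, n_filters

print(conv_output_shape(33, 33, kernel=9, n_filters=20))   # (25, 25, 20)
print(conv_output_shape(25, 25, kernel=5, n_filters=10))   # (21, 21, 10)
```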
“…This study presents the CNN model for directly learning an end-to-end mapping between the low- and high-resolution images. The image super-resolution method based on CNN has been studied [25,26]. In Dietrich et al [27], a convolutional neural network model was applied to image deblurring.…”
Section: Introduction (mentioning)
confidence: 99%
“…Robust deep-learning methods were also implemented to map low-resolution patches to high-resolution patches, seeking the best regression functions for this mapping, as in [13], [14], [15], [16]. Among these several successful examples, the Super-Resolution Convolutional Neural Network (SRCNN) [17] has proved to be a good alternative for an end-to-end approach in super-resolution.…”
Section: Introduction (mentioning)
confidence: 99%