2019
DOI: 10.2991/ijcis.d.191209.001
An Advanced Deep Residual Dense Network (DRDN) Approach for Image Super-Resolution

Abstract: In recent years, increasing attention has been paid to single-image super-resolution reconstruction (SISR) using deep learning networks. These networks have achieved good reconstruction results, but questions such as how to make better use of the feature information in the image and how to improve the network convergence speed still need further study. To address these problems, a novel deep residual dense network (DRDN) is proposed in this paper. In detail, DRDN uses the residual-dense structure for local f…
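The residual-dense structure the abstract refers to can be illustrated with a minimal sketch. This is not the authors' released implementation; the channel width, growth rate, and layer count below are illustrative assumptions, and only the general pattern (dense concatenation of convolutional features plus a local residual connection) reflects the described design.

```python
# Minimal sketch of a residual-dense block, assuming a DRDN-style design:
# each conv layer sees the concatenation of all previous features, and the
# fused output is added back to the block input (local residual connection).
# Channel width, growth rate, and depth are illustrative, not the paper's values.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
        # 1x1 conv fuses the densely concatenated features back to `channels`
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        # local residual connection: fused dense features + block input
        return self.fuse(torch.cat(feats, dim=1)) + x
```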

Cited by 60 publications (24 citation statements)
References 18 publications
“…(1) Difference from DRDN [33]: In DRDN, there are dense block (DB) structures for feature exploitation. The entire DRDN holds a global residual dense connection design to efficiently process the features.…”
Section: Discussion (mentioning)
confidence: 99%
“…Kaiming He et al. proposed the residual network (ResNet) [24], which solved the problem of network degradation by using skip connections. Wang et al. proposed a deep residual dense network based on ResNet for reconstructing super-resolution images with high accuracy [25].…”
Section: Convolutional Neural Network (mentioning)
confidence: 99%
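The skip connection mentioned in this statement can be shown with a minimal residual block: the block input is added to the output of its convolutional branch, which is what eases optimization of very deep networks. The layer arrangement below is a generic sketch, not the cited papers' exact architecture.

```python
# Minimal residual block in the ResNet sense: the identity skip connection
# adds the block input to the convolutional branch's output.
import torch.nn as nn

class BasicResBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)  # identity skip connection
```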
“…ResNets are built by stacking residual blocks, so the network is easier to optimize and its depth can be greatly increased, both of which improve the recognition accuracy [31].…”
Section: ResNets (mentioning)
confidence: 99%
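Stacking residual blocks to deepen the network, as this statement notes, is a one-liner once a block is defined. The snippet reuses the hypothetical BasicResBlock from the previous sketch; the depth of 16 blocks is an arbitrary illustration.

```python
# Stacking residual blocks to build a deeper body; 16 is an arbitrary depth.
import torch.nn as nn

deep_body = nn.Sequential(*[BasicResBlock(64) for _ in range(16)])
```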