2017 IEEE International Conference on Computer Vision (ICCV) 2017
DOI: 10.1109/iccv.2017.514
Image Super-Resolution Using Dense Skip Connections

Cited by 1,092 publications (719 citation statements). References 18 publications.
“…The authors removed some unnecessary modules (e.g., Batch Normalization) from SRResNet [16] to obtain better results. Based on EDSR, Zhang et al. incorporated the densely connected block [9,27] into the residual block [7] to construct the residual dense network (RDN). They soon exploited the residual-in-residual architecture for very deep models and introduced a channel attention mechanism [8] to form the very deep residual channel attention network (RCAN) [36].…”
Section: Introduction
confidence: 99%
“…To start, we briefly present the difference between ResNet [11] and DenseNet [12], [32]. The residual path reuses features implicitly, but it does not have any impact on the exploration of new features.…”
Section: B. Densely Connected Residual Network
confidence: 99%
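The distinction this excerpt draws between residual and dense connectivity can be sketched in a few lines. This is a toy illustration, not code from either paper: features are flat lists, and `f` is a hypothetical stand-in for a convolutional layer's learned transform. The point is structural: a residual path fuses features by addition (implicit reuse, fixed width), while a dense path concatenates them (explicit reuse, growing width).

```python
def f(x):
    # Hypothetical placeholder for a learned transform (e.g., a conv layer).
    return [2.0 * v for v in x]

def residual_block(x):
    # ResNet-style: old and new features are summed, so earlier features
    # are reused only implicitly and the output width stays the same.
    return [a + b for a, b in zip(x, f(x))]

def dense_block(x):
    # DenseNet-style: input features are concatenated with new ones, so
    # earlier features remain explicitly available to later layers.
    return x + f(x)

x = [1.0, 2.0, 3.0]
print(residual_block(x))  # [3.0, 6.0, 9.0] -- same width, features fused
print(dense_block(x))     # [1.0, 2.0, 3.0, 2.0, 4.0, 6.0] -- width grows
```

The growing width of the dense path is exactly what lets a dense-skip-connection network "explore new features" while keeping old ones, at the cost of wider feature maps downstream.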
“…methods. Therefore, research in the field has long relied on known degradation operators such as the bicubic kernel to artificially generate a corresponding LR image [9,31,34]. While this straightforward approach enables simple and efficient benchmarking and the generation of virtually unlimited training data, it comes with significant drawbacks.…”
Section: Introduction
confidence: 99%
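The degradation pipeline described above can be sketched minimally. Note the hedge: this toy uses average pooling as a stand-in for the bicubic kernel the excerpt mentions (a real pipeline would use a true bicubic resampler), and the images are plain nested lists rather than tensors. It shows only the structural idea: each HR image deterministically yields an LR counterpart, giving unlimited (HR, LR) training pairs.

```python
def downsample(hr, scale):
    # Average-pooling stand-in for the bicubic kernel: each LR pixel is
    # the mean of a scale x scale patch of the HR image. A real SR
    # pipeline would apply bicubic filtering here instead.
    h, w = len(hr), len(hr[0])
    return [[sum(hr[i * scale + di][j * scale + dj]
                 for di in range(scale) for dj in range(scale)) / scale**2
             for j in range(w // scale)]
            for i in range(h // scale)]

# A 4x4 "HR image" with pixel values 0..15 yields a 2x2 "LR image".
hr = [[float(i * 4 + j) for j in range(4)] for i in range(4)]
lr = downsample(hr, 2)
print(lr)  # [[2.5, 4.5], [10.5, 12.5]]
```

The drawback the excerpt alludes to is visible even in this sketch: the operator is fixed and known, so a model trained on such pairs implicitly learns to invert this one kernel and may not generalize to real-world degradations.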