2018
DOI: 10.1007/978-3-030-01234-2_18
Image Super-Resolution Using Very Deep Residual Channel Attention Networks

Abstract: Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structu…
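The full RCAN architecture is not reproduced here, but the channel-attention mechanism the abstract names can be sketched as a squeeze-and-excitation style gate: global average pooling per channel, a two-layer bottleneck, and a sigmoid that rescales each channel. The NumPy shapes and the random placeholder weights below are illustrative assumptions, not the authors' trained parameters.

```python
import numpy as np

def channel_attention(features, reduction=16, rng=None):
    """Rescale each channel of a (C, H, W) feature map by a learned-style
    attention weight. Weights here are random placeholders for illustration."""
    rng = np.random.default_rng(0) if rng is None else rng
    c = features.shape[0]
    # Squeeze: global average pooling collapses each channel to one statistic.
    z = features.mean(axis=(1, 2))                      # shape (C,)
    # Excitation: bottleneck MLP (C -> C/reduction -> C).
    w1 = rng.standard_normal((c // reduction, c)) * 0.01
    w2 = rng.standard_normal((c, c // reduction)) * 0.01
    s = np.maximum(w1 @ z, 0.0)                         # ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))                 # sigmoid gate in (0, 1)
    # Rescale: each channel is multiplied by its scalar attention weight.
    return features * s[:, None, None]
```

Because the gate is a per-channel scalar in (0, 1), low-frequency channels can be suppressed while informative channels pass through, which is the channel-wise discrimination the abstract argues plain CNNs lack.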

Cited by 3,060 publications (2,942 citation statements)
References 45 publications (181 reference statements)
“…A large variety of successful network architectures have been developed in other areas of image processing, e.g. FFDNet [36] or RCAN [37]. Furthermore, the denoiser may be trained on smaller patches of the reconstruction, rather than on the entire map as done here, and the patches may be combined using a sliding window.…”
Section: Conclusion and Discussion
confidence: 99%
“…Their network is more efficient, because it builds the HR image only at the very end. Other works, such as [29], proposed deeper architectures, focusing strictly on accuracy. Indeed, Zhang et al [29] presented one of the deepest CNNs used for SR, composed of 400 layers.…”
Section: Related Work
confidence: 99%
“…The adjacent band number K is set to 64, the downsample ratio r is set to 10 as in [12] and the trade-off parameter λ is equal to 10 during all the training procedure. We use the truncated normal distribution to initialize the weights and train the network from scratch.…”
Section: Implementation Details
confidence: 99%
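The citing paper above initializes its weights from a truncated normal distribution. A minimal rejection-sampling sketch of that initializer is below; the standard deviation and the two-sigma truncation bound are assumed placeholders, not values stated in the quoted work.

```python
import numpy as np

def truncated_normal(shape, std=0.02, bound=2.0, rng=None):
    """Draw weights from N(0, std^2), resampling any value whose magnitude
    exceeds `bound` standard deviations, so no extreme outliers remain."""
    rng = np.random.default_rng(0) if rng is None else rng
    w = rng.standard_normal(shape) * std
    mask = np.abs(w) > bound * std
    while mask.any():
        # Redraw only the out-of-range entries until all fall inside the bound.
        w[mask] = rng.standard_normal(mask.sum()) * std
        mask = np.abs(w) > bound * std
    return w
```

Truncation keeps the initial pre-activations bounded, which matters when training a very deep network like RCAN from scratch.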