2021
DOI: 10.1016/j.jpdc.2021.02.016
Image super-resolution via enhanced multi-scale residual network

Abstract: Recently, very deep convolutional neural networks (CNNs) have achieved impressive results in image super-resolution (SR). In particular, residual learning techniques are widely used. However, the previously proposed residual block can only extract single-level semantic feature maps at a single receptive field. Therefore, residual blocks must be stacked to extract higher-level semantic feature maps, which significantly deepens the network. While a very deep network is hard to train and lim…
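The abstract's point — that a plain residual block sees only one receptive field, so blocks must be stacked to reach higher-level features — can be illustrated with a minimal sketch. This is a hypothetical 1-D numpy illustration of a multi-scale residual block, not the paper's implementation: two parallel branches filter the input at different kernel sizes (hence different receptive fields), and their fused output is added to the identity skip.

```python
import numpy as np

def conv1d_same(x, kernel):
    # "same"-padded 1-D convolution: a cheap stand-in for a conv layer
    pad = len(kernel) // 2
    return np.convolve(np.pad(x, pad, mode="edge"), kernel, mode="valid")

def multi_scale_residual_block(x, k3, k5, fuse):
    # Two parallel branches with different receptive fields (kernel
    # sizes 3 and 5), fused by a weighted sum, plus the identity skip.
    branch3 = np.maximum(conv1d_same(x, k3), 0)  # ReLU
    branch5 = np.maximum(conv1d_same(x, k5), 0)
    return x + fuse[0] * branch3 + fuse[1] * branch5
```

Because both branch outputs keep the input length, the identity skip adds elementwise; with zero fuse weights the block reduces to the identity, which is what makes very deep residual stacks trainable.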

Cited by 9 publications (7 citation statements)
References 47 publications
“…To prove the effectiveness and to advance the method in this paper, we selected recent deep learning methods for comparison with our method, including SRCNN [9], VDSR [11], EDSR [28], fast, accurate, and lightweight super‐resolution with cascading residual network (CARN) [52], IDN [53], RCAN [29], RDN [37], RFDN [54], SMSR [55], NLSN [39], BSRN [40], DRMSFFN [56], and EMRN [57]. The comparisons were made at two, three, and four times the scale, and the results are shown in Table 1.…”
Section: Methods
confidence: 99%
“…Yang et al. [43] introduced convolutional residuals into DenseNet by parallelizing multiple multi-resolution convolutional residuals within a dense block, effectively reducing the number of super-resolution network parameters while increasing model hierarchy and network depth, reaching a prediction speed of 25 ms per image at some loss of accuracy. A single-branch convolutional residual block can only extract single-level semantic information; Wang et al. [44] designed an enhanced multi-scale residual network (EMRN) for the image super-resolution task, in which densely connected multi-scale residual blocks extract hierarchical image features to obtain multi-level semantic information over different receptive fields.…”
Section: Development of DenseNet
confidence: 99%
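The dense-connection pattern this excerpt describes — each block receiving the concatenated features of the input and all earlier blocks, so later blocks access multi-level semantics directly — can be sketched abstractly. This is a hypothetical illustration on 1-D feature vectors, with stand-in callables for the blocks, not the cited architecture:

```python
import numpy as np

def dense_multiscale_stack(x, blocks):
    # Dense connectivity: each block sees the concatenation of the
    # input and every previous block's output, so hierarchical
    # (multi-level) features reach later blocks without being re-learned.
    features = [x]
    for block in blocks:
        out = block(np.concatenate(features))
        features.append(out)
    return np.concatenate(features)
```

In a real network the concatenation grows along the channel axis and is typically compressed by a 1x1 convolution before the next block; here the growth is left explicit to show the connectivity.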
“…Special-domain applications include recovering sharp high-resolution (HR) images from extreme motion blur [22], thermal images [23][24], X-ray images [25], 3D image SR [26], stereo images [27], remote sensing images [18], and so on. Multi-level image SR approaches include pyramidal structures [28], multi-scale SR [29]-[31], GANs (generative adversarial networks) [32], RNNs (recurrent neural networks) [33], cascades [17][18], and so on.…”
Section: Related Research
confidence: 99%
“…Among current methods, residual learning and dense connection are the most important structures. Residual learning [26][35]-[37] constructs a connection between the input low-resolution image and the final output high-resolution image. This connection is also called a global connection.…”
Section: Related Researchmentioning
confidence: 99%
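The global connection described in this excerpt — a residual path from the low-resolution input to the high-resolution output, so the network only predicts missing high-frequency detail — can be sketched as follows. This is a hypothetical illustration: nearest-neighbour upsampling stands in for the interpolation, and `residual_net` is any callable playing the role of the trained predictor.

```python
import numpy as np

def upsample_nn(lr, scale):
    # Nearest-neighbour upsampling: a cheap stand-in for bicubic
    # interpolation of the low-resolution input.
    return np.repeat(np.repeat(lr, scale, axis=0), scale, axis=1)

def super_resolve(lr, residual_net, scale=2):
    # Global residual connection: the network predicts only the
    # high-frequency detail missing from the upsampled base image.
    base = upsample_nn(lr, scale)
    return base + residual_net(base)
```

Because the base image already carries the low-frequency content, the residual the network must learn is sparse and small in magnitude, which eases optimization of very deep SR models.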