2021
DOI: 10.1007/978-3-030-69532-3_17

Lightweight Single-Image Super-Resolution Network with Attentive Auxiliary Feature Learning

Cited by 24 publications (24 citation statements)
References 41 publications
“…The BI degradation model has been widely used to obtain LR images in the image SR tasks. In order to demonstrate the effectiveness of the RTAN, we compared it with 16 state-of-the-art CNN-based SR methods, including SRMDNF [7], NLRN [32], EDSR [17], DBPN [73], NDRCN [35], ACNet [38], FALSR-A [37], OISR-RK2-s [34], MCAN [47], A²F-SD [48], A2N-M [63], DeFiAN S [61], IMDN [33], SMSR [36], PAN [59], MGAN [62], RNAN [55].…”
Section: Results with Bicubic (BI) Degradation Model
confidence: 99%
“…In order to demonstrate the powerful reconstruction ability of the proposed method with the BD degradation model, we compare the RTAN with 14 state-of-the-art CNN-based models, i.e., SPMSR [4], SRCNN [5], FSRCNN [74], VDSR [12], SRMD [7], EDSR [17], RDN [16], IRCNN [75], SRFBN [76], RCAN [6], A²F-SD [48], IMDN [33], DeFiAN S [61], PAN [59], and MGAN [62].…”
Section: Results with Blur-Downscale (BD) Degradation Model
confidence: 99%
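The BI and BD degradation models mentioned in these two statements are the standard ways of synthesizing LR inputs for training and evaluation. As a hedged illustration only (not code from the cited papers): BI applies bicubic downscaling to the HR image, while BD first blurs it with a Gaussian kernel and then downscales. The scale factors, blur width, and resampling choice below are assumptions.

# Illustrative sketch of BI and BD degradation; parameters are assumptions.
from PIL import Image, ImageFilter

def bi_degrade(hr: Image.Image, scale: int = 4) -> Image.Image:
    # BI model: plain bicubic downsampling of the HR image
    w, h = hr.size
    return hr.resize((w // scale, h // scale), resample=Image.BICUBIC)

def bd_degrade(hr: Image.Image, scale: int = 3, blur_sigma: float = 1.6) -> Image.Image:
    # BD model: Gaussian blur first, then downscaling
    blurred = hr.filter(ImageFilter.GaussianBlur(radius=blur_sigma))
    w, h = blurred.size
    return blurred.resize((w // scale, h // scale), resample=Image.BICUBIC)

For example, bd_degrade(Image.open("hr.png")) would produce a blur-downscale LR image at scale 3 under these assumed settings.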
“…Besides that, Wang et al [32] proposed local and global feature-fusion modules within and across residual blocks, respectively, which is efficient and can effectively improve the performance. Instead of directly fusing features extracted from intermediate layers through summation, Wang et al [33] proposed to project the extracted features into a shared space, and then fuse them in this common space for the reconstruction. The information distillation block proposed by IDN [14] is another efficient compact module for SISR, which splits the features generated into two parts in a residual block.…”
Section: Related Work 2.1 Deep Lightweight SISR Models
confidence: 99%
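The shared-space fusion described in the statement above is the mechanism the indexed paper is cited for (attentive auxiliary feature learning). A minimal PyTorch sketch of the general technique follows, assuming 1x1 projections into a common space and a simple channel-attention weighting; the class name SharedSpaceFusion, the layer choices, and the tensor shapes are illustrative assumptions, not the authors' implementation.

# Hedged sketch: project intermediate features into a shared space and fuse them
# with per-channel attention weights. Shapes and layer choices are assumptions.
import torch
import torch.nn as nn

class SharedSpaceFusion(nn.Module):
    def __init__(self, num_blocks: int, channels: int = 64):
        super().__init__()
        # one 1x1 projection per intermediate feature map
        self.projections = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=1) for _ in range(num_blocks)]
        )
        # simple channel attention producing a weight vector per projection
        self.attentions = nn.ModuleList(
            [nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels, kernel_size=1),
                nn.Sigmoid(),
            ) for _ in range(num_blocks)]
        )

    def forward(self, features):
        # features: list of (N, C, H, W) tensors from intermediate blocks
        fused = 0
        for feat, proj, attn in zip(features, self.projections, self.attentions):
            shared = proj(feat)                   # project into the shared space
            fused = fused + attn(feat) * shared   # attentive, weighted fusion
        return fused

# usage with dummy intermediate features
feats = [torch.randn(1, 64, 32, 32) for _ in range(4)]
print(SharedSpaceFusion(num_blocks=4)(feats).shape)  # torch.Size([1, 64, 32, 32])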
“…Wang et al [32] proposed a local feature-fusion strategy within the residual block and a global feature-fusion strategy across the residual blocks to improve network efficiency. Wang et al [33] proposed to project the generated features from the intermediate layers into a shared space for feature fusion, which can further improve the performance. The information distillation block proposed in [13,14] is another efficient and compact module, which can progressively extract features for the reconstruction, leading to a better trade-off between performance and model parameters.…”
Section: Introduction
confidence: 99%
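The information distillation block cited in both statements splits a feature map along the channel dimension so that one part is kept (distilled) and the other is refined further. The sketch below illustrates only that splitting idea under assumed channel counts, layer choices, and a 0.25 distillation ratio; it is not the IDN authors' code.

# Hedged sketch of IDN-style channel splitting; names and ratios are assumptions.
import torch
import torch.nn as nn

class DistillationBlock(nn.Module):
    def __init__(self, channels: int = 64, distill_ratio: float = 0.25):
        super().__init__()
        self.distilled = int(channels * distill_ratio)
        self.remaining = channels - self.distilled
        self.conv1 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(self.remaining, channels, 3, padding=1), nn.ReLU(inplace=True)
        )
        self.fuse = nn.Conv2d(channels + self.distilled, channels, 1)

    def forward(self, x):
        feat = self.conv1(x)
        # split along the channel dimension: keep one part, refine the other
        kept, rest = torch.split(feat, [self.distilled, self.remaining], dim=1)
        refined = self.conv2(rest)
        return self.fuse(torch.cat([kept, refined], dim=1)) + x  # residual connection

Splitting rather than forwarding the full feature map is what keeps such blocks lightweight: only part of the channels pass through the extra convolutions at each stage.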