2023
DOI: 10.1109/TMM.2022.3219646

DDistill-SR: Reparameterized Dynamic Distillation Network for Lightweight Image Super-Resolution

Abstract: Recent research on deep convolutional neural networks (CNNs) has provided a significant performance boost on efficient super-resolution (SR) tasks by trading off performance and applicability. However, most existing methods focus on reducing feature-processing consumption to cut parameters and calculations without refining the intermediate features, which leads to inadequate information in the restoration. In this paper, we propose a lightweight network termed DDistill-SR, which significantly improves…
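
The title's "reparameterized" refers to structural reparameterization: training with a multi-branch convolutional block, then algebraically folding the branches into a single convolution for deployment. As a hedged illustration of that general trick (the toy three-branch layout below is an assumption for demonstration, not the paper's actual RDU), a minimal PyTorch sketch:

```python
# Minimal sketch of structural reparameterization, assuming a toy
# three-branch block (3x3 conv + 1x1 conv + identity). Illustrates the
# general technique named in the title, not the paper's actual RDU.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepBlock(nn.Module):
    """Train with parallel branches; deploy as a single 3x3 conv."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv1 = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        return self.conv3(x) + self.conv1(x) + x  # identity branch

    def reparameterize(self) -> nn.Conv2d:
        """Fold all three branches into one 3x3 conv with equal output."""
        c = self.conv3.out_channels
        fused = nn.Conv2d(c, c, 3, padding=1)
        # Pad the 1x1 kernel to 3x3 so the kernels can be summed.
        k1 = F.pad(self.conv1.weight, [1, 1, 1, 1])
        # The identity branch equals a 3x3 kernel with 1 at the center.
        k_id = torch.zeros_like(self.conv3.weight)
        for i in range(c):
            k_id[i, i, 1, 1] = 1.0
        with torch.no_grad():
            fused.weight.copy_(self.conv3.weight + k1 + k_id)
            fused.bias.copy_(self.conv3.bias + self.conv1.bias)
        return fused
```

By linearity of convolution, the fused layer matches the training-time forward pass up to numerical precision, so inference pays for only one convolution.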

Citations: cited by 11 publications (14 citation statements)
References: 59 publications
Citation types: 0 supporting, 14 mentioning, 0 contrasting
“…Table 1 summarizes the quantitative comparison between the proposed F2SRGAN and current state-of-the-art lightweight SISR methods, including DDistill-SR [29], FDSCSR-S [30], FRN [31], HNCT [32], HPINet-M [33], LatticeNet [34], LESRCNN [35], OverNet [36], SCET [37], SRRFN-BI [38], URN [39], and SwiftSRGAN [1]. We report PI and PSNR alongside efficiency metrics, namely the total number of parameters and the multiply-adds count, at upscale factors of 2 and 4.…”
Section: Results (mentioning)
confidence: 99%
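
For readers unfamiliar with the efficiency metrics quoted above, parameters and multiply-adds for a convolution are typically counted as in the sketch below. This assumes the common convention (one multiply-add per kernel weight per output position, measured on the high-resolution output); the cited comparison's exact protocol may differ:

```python
# Generic sketch of the two efficiency metrics used in lightweight-SR
# comparisons; conventions vary slightly between papers.
def conv_params(c_in: int, c_out: int, k: int, bias: bool = True) -> int:
    """Learnable parameters of a k x k convolution."""
    return c_out * (c_in * k * k + (1 if bias else 0))

def conv_mult_adds(c_in: int, c_out: int, k: int, h: int, w: int) -> int:
    """Multiply-adds of a k x k convolution on an h x w output map."""
    return c_out * c_in * k * k * h * w

# Example: one 64-channel 3x3 conv evaluated on a 1280x720 HR output,
# a resolution often assumed when reporting mult-adds for x2/x4 SR.
print(conv_params(64, 64, 3))                      # 36928
print(conv_mult_adds(64, 64, 3, 720, 1280) / 1e9)  # ~33.97 G mult-adds
```
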
“…As shown in Figure 1, our network outperforms state-of-the-art (SOTA) efficient models such as DDistill-SR (Wang et al., 2022) or SAFMN (Sun et al., 2023) by a considerable margin, while using half the GMACs or even less. Although our model is specifically designed for efficient SR, its scalability is evident: our larger model surpasses the SOTA lightweight transformer in performance while incurring lower computational costs.…”
Section: Introduction (mentioning)
confidence: 86%
“…We present quantitative results for ×2, ×3, and ×4 image SR, comparing against current efficient state-of-the-art models in Table 1, including CARN-M (Ahn et al., 2018), IMDN (Hui et al., 2019), PAN (Zhao et al., 2020), DRSAN (Park et al., 2021), DDistill-SR (Wang et al., 2022), ShuffleMixer (Sun et al., 2022), and SAFMN (Sun et al., 2023). Additionally, we evaluate against lightweight variants of popular Transformer-based SR models such as SwinIR (Liu et al., 2021), ELAN (Zhang et al., 2022), and SRFormer (Zhou et al., 2023). […] may suffer blurring artifacts, distortions, or inaccurate texture restoration.…”
Section: Comparison to State-of-the-Art Methods (mentioning)
confidence: 99%
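
The PSNR figures cited in such comparisons are typically computed as below. This is a minimal sketch assuming 8-bit images compared over all pixels; many SR papers additionally crop a scale-dependent border and evaluate on the Y channel of YCbCr, which changes the reported values:

```python
# Minimal PSNR sketch, assuming 8-bit images and full-frame comparison.
import numpy as np

def psnr(sr: np.ndarray, hr: np.ndarray, max_val: float = 255.0) -> float:
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```
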
“…An SRB, the core component of RFDN, is additionally introduced, allowing the network to efficiently leverage residual learning while maintaining its lightweight nature. Inspired by RFDN, another approach has recently been proposed, known as the reparameterized dynamic distillation network (Wang et al., 2022), which introduces a reparameterized dynamic unit (RDU) to raise performance and dynamic distillation fusion (DDF) to enable dynamic signal collection while trading off the running cost. The importance of SR can also be seen in segmentation; for example, an adaptive multi-scale dual attention network (Wang, Wang, et al., 2021) is proposed for semantic segmentation.…”
Section: Related Work (mentioning)
confidence: 99%
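
The distillation pattern this excerpt describes (IMDN/RFDN-style blocks that split channels into a "distilled" slice that is kept and a remainder that is refined further) can be sketched as follows. The channel ratio, depth, and layer choices are illustrative assumptions, not the published DDistill-SR configuration:

```python
# Illustrative IMDN/RFDN-style distillation block: at each step a slice
# of channels is "distilled" (kept) and the rest is refined further,
# then all slices are fused. Ratios and layers are assumptions, not the
# published DDistill-SR configuration.
import torch
import torch.nn as nn

class DistillBlock(nn.Module):
    def __init__(self, channels: int = 48, distill_ratio: float = 0.25,
                 steps: int = 3):
        super().__init__()
        self.dc = int(channels * distill_ratio)  # distilled (kept) channels
        self.rc = channels - self.dc             # remaining channels
        in_ch = [channels] + [self.rc] * (steps - 1)
        self.distill = nn.ModuleList(nn.Conv2d(c, self.dc, 1) for c in in_ch)
        self.refine = nn.ModuleList(
            nn.Conv2d(c, self.rc, 3, padding=1) for c in in_ch)
        self.fuse = nn.Conv2d(steps * self.dc + self.rc, channels, 1)
        self.act = nn.LeakyReLU(0.05, inplace=True)

    def forward(self, x):
        kept, cur = [], x
        for d, r in zip(self.distill, self.refine):
            kept.append(self.act(d(cur)))  # keep a distilled slice
            cur = self.act(r(cur))         # refine the rest
        kept.append(cur)
        return self.fuse(torch.cat(kept, dim=1)) + x  # residual connection
```

A forward pass preserves shape, e.g. `DistillBlock()(torch.randn(1, 48, 32, 32))` returns a `(1, 48, 32, 32)` tensor; per the quoted description, DDistill-SR builds on this pattern by making the refinement units reparameterized and dynamic.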