Proceedings of the 29th ACM International Conference on Multimedia 2021
DOI: 10.1145/3474085.3475288
Boosting Lightweight Single Image Super-resolution via Joint-distillation

Cited by 6 publications (3 citation statements) | References 17 publications
“…In contrast, self-distillation needs no extra network except for the network itself. While self-distillation has been successfully applied in computer vision and natural language processing [50]-[52], it focuses on unimodal tasks.…”
Section: Knowledge Distillation
Mentioning confidence: 99%
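
The self-distillation mentioned in this excerpt can be pictured with a small sketch: intermediate exits of one network are trained against the network's own final output, so no separate teacher model is required. This is a minimal PyTorch-style illustration under my own assumptions; ToyNet, its exit heads, and the loss weighting are hypothetical and not taken from the cited work.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyNet(nn.Module):
    # Hypothetical network with a shallow exit (student role) and a deep exit (teacher role).
    def __init__(self, channels=32):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.exit_shallow = nn.Conv2d(channels, 3, 3, padding=1)
        self.exit_deep = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        return self.exit_shallow(f1), self.exit_deep(f2)

def self_distillation_loss(out_shallow, out_deep, target, alpha=0.1):
    # Both exits are supervised by the target; the shallow exit is additionally
    # pulled toward the detached deep exit, so the network teaches itself.
    supervised = F.l1_loss(out_deep, target) + F.l1_loss(out_shallow, target)
    distill = F.l1_loss(out_shallow, out_deep.detach())
    return supervised + alpha * distill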
“…After SRKD, PISR [16] and JDSR [17] were developed. PISR uses the high-frequency components of the HR image as privileged information to generate a high-performance teacher model, and then transfers knowledge from the teacher model to the student model by measuring the feature difference between the teacher and student models in an embedding space with Gaussian or Laplacian distribution.…”
Section: Knowledge Distillation in SR Domain
Mentioning confidence: 99%
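
As a rough illustration of the feature matching described in this excerpt, the sketch below penalizes the distance between student and frozen teacher features in a shared embedding space; a squared L2 penalty corresponds to a Gaussian assumption on the feature residual and an L1 penalty to a Laplacian one. The tensor shapes and the helper name are assumptions for illustration, not PISR's actual implementation.

import torch
import torch.nn.functional as F

def feature_distillation_loss(student_feat, teacher_feat, dist="gaussian"):
    # Match student features to detached teacher features of the same shape (B, C, H, W).
    # dist="gaussian" -> squared L2 penalty, dist="laplacian" -> L1 penalty.
    teacher_feat = teacher_feat.detach()  # teacher is frozen during distillation
    if dist == "gaussian":
        return F.mse_loss(student_feat, teacher_feat)
    if dist == "laplacian":
        return F.l1_loss(student_feat, teacher_feat)
    raise ValueError(f"unknown distance: {dist}")

# Example with hypothetical feature maps:
student = torch.randn(4, 64, 32, 32, requires_grad=True)
teacher = torch.randn(4, 64, 32, 32)
feature_distillation_loss(student, teacher, dist="laplacian").backward()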
“…Therefore, we have focused on applying KD (which is less dependent on hardware) to SR models. SRKD [15], PISR [16], and JDSR [17] are examples that apply KD to the SR domain. As an initial study, SRKD used feature distillation so that the student model learns to resemble the feature distribution of the teacher model.…”
Section: Introduction
Mentioning confidence: 99%
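
One simple way to read "learning the feature distribution of the teacher" in this excerpt is to match per-channel feature statistics rather than raw activations. The sketch below does exactly that; matching only the first and second moments is an assumption made for illustration and is not SRKD's published loss.

import torch

def feature_statistics_loss(student_feat, teacher_feat, eps=1e-6):
    # Encourage the student's per-channel feature mean and standard deviation
    # to resemble the teacher's, as a coarse proxy for matching feature
    # distributions. Both tensors are assumed to be shaped (B, C, H, W).
    t = teacher_feat.detach()
    s_mean, t_mean = student_feat.mean(dim=(0, 2, 3)), t.mean(dim=(0, 2, 3))
    s_std = student_feat.std(dim=(0, 2, 3)) + eps
    t_std = t.std(dim=(0, 2, 3)) + eps
    return (s_mean - t_mean).abs().mean() + (s_std - t_std).abs().mean()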