2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00215
Space-Time Distillation for Video Super-Resolution

Cited by 38 publications (7 citation statements) · References 37 publications
“…1) Lightening the VSR model with knowledge distillation. There is a considerable performance gap between lightweight VSR models and the complex VSR models in common use, while the latter require a much larger amount of resources (Xiao et al. 2021). This problem is particularly acute on resource-limited devices, e.g., smartphones and wearable devices.…”
Section: Discussion
confidence: 99%
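The statement above motivates training a lightweight student under a heavier teacher. As a minimal, hypothetical PyTorch sketch of such a setup (the loss form, the `alpha` weight, and all names are illustrative assumptions, not taken from the cited works):

```python
import torch.nn.functional as F

def vsr_distill_loss(student_sr, teacher_sr, hr_gt, alpha=0.5):
    # Supervised reconstruction against the ground-truth HR frame,
    # plus an imitation term toward the frozen teacher's output.
    recon = F.l1_loss(student_sr, hr_gt)
    imitate = F.l1_loss(student_sr, teacher_sr.detach())
    return recon + alpha * imitate
```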
“…In [31], the well-known two-stream architecture is devised by applying two 2D CNN architectures separately to RGB frames and optical flow for action recognition. The idea of the two-stream architecture has also been explored from the perspective of knowledge distillation [5, 33, 45]. Moreover, Wang et al. [40] devise a self-supervised pretext task by estimating the motion in unlabeled videos.…”
Section: Related Work
confidence: 99%
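A minimal sketch of the late-fusion two-stream design mentioned above, assuming one 2D CNN per modality with class scores averaged at the end; the layer widths, class count, and the choice of 20 stacked flow channels are assumptions for illustration:

```python
import torch.nn as nn

class TwoStreamNet(nn.Module):
    """Illustrative two-stream model: one 2D CNN on an RGB frame,
    another on stacked optical-flow fields, fused by score averaging."""
    def __init__(self, num_classes=101, flow_channels=20):
        super().__init__()
        self.rgb_stream = self._make_stream(3, num_classes)
        self.flow_stream = self._make_stream(flow_channels, num_classes)

    @staticmethod
    def _make_stream(in_ch, num_classes):
        return nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, rgb, flow):
        # Late fusion: average the per-stream class scores.
        return (self.rgb_stream(rgb) + self.flow_stream(flow)) / 2
```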
“…Lee et al. [37] proposed to leverage privileged information from ground-truth images and to distill knowledge by minimizing the distance between the features of the teacher network and those of the student network. Xiao et al. [38] proposed an effective knowledge-distillation method for video super-resolution, which enforces the spatial and temporal characteristics of the teacher and student networks to be consistent. All of these distillation-based super-resolution models require the network topologies of the teacher and the student to be consistent.…”
Section: Knowledge Transfer for Image and Video Super-Resolution
confidence: 99%
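One simple way to enforce both spatial and temporal feature consistency is sketched below. This is a simplified reading, not the exact formulation of Xiao et al. [38], and it assumes the teacher and student features already share a common shape:

```python
import torch.nn.functional as F

def space_time_distill_loss(feat_s, feat_t):
    """feat_s, feat_t: (B, T, C, H, W) intermediate features of the
    student and teacher. The spatial term aligns per-frame features;
    the temporal term aligns frame-to-frame feature differences, so
    the student also imitates how teacher features evolve over time."""
    spatial = F.l1_loss(feat_s, feat_t)
    diff_s = feat_s[:, 1:] - feat_s[:, :-1]  # student temporal dynamics
    diff_t = feat_t[:, 1:] - feat_t[:, :-1]  # teacher temporal dynamics
    temporal = F.l1_loss(diff_s, diff_t)
    return spatial + temporal
```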
“…We further compare our method with two deep super-resolution models based on knowledge distillation, i.e., FAKD [36] and FDVDNet [38]. The compared methods distill prior knowledge from the teacher network based on the feature maps generated by its intermediate layers.…”
Section: E. Ablation Study on Knowledge Transfer
confidence: 99%
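FAKD-style methods match second-order statistics of intermediate feature maps rather than the raw activations, which relaxes the requirement that teacher and student share channel dimensions. A hedged sketch of such an affinity-based loss (the normalization and distance choices here are assumptions):

```python
import torch
import torch.nn.functional as F

def spatial_affinity(feat):
    """Pairwise similarity between all H*W positions of a (B, C, H, W)
    feature map, with channel vectors normalized to unit length."""
    b, c, h, w = feat.shape
    f = F.normalize(feat.view(b, c, h * w), dim=1)
    return torch.bmm(f.transpose(1, 2), f)  # (B, HW, HW)

def affinity_distill_loss(feat_s, feat_t):
    # Match affinity matrices instead of raw features, so the teacher
    # and student feature maps need not have the same channel count.
    return F.l1_loss(spatial_affinity(feat_s), spatial_affinity(feat_t))
```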