2020
DOI: 10.1007/978-3-030-58539-6_37

Optical Flow Distillation: Towards Efficient and Stable Video Style Transfer

Abstract: Video style transfer techniques inspire many exciting applications on mobile devices. However, their efficiency and stability are still far from satisfactory. To boost transfer stability across frames, optical flow is widely adopted, despite its high computational complexity, e.g., occupying over 97% of inference time. This paper proposes to learn a lightweight video style transfer network via a knowledge distillation paradigm. We adopt two teacher networks, one of which takes optical flow during inference while…
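
The abstract is truncated, but the core idea is clear enough to sketch. Below is a minimal, hypothetical PyTorch-style sketch of one distillation step, assuming one teacher that stylizes frame pairs with optical flow and one flow-free teacher; all names, teacher outputs, and loss weights are illustrative assumptions, not the paper's actual code.

# Hypothetical sketch of optical-flow distillation: a lightweight student is
# trained to match a heavy flow-based teacher (stable across frames) and a
# flow-free teacher (per-frame style quality). Names and weights are assumed.
import torch
import torch.nn.functional as F

def distillation_step(student, flow_teacher, image_teacher,
                      frame_prev, frame_curr, w_image=0.5):
    with torch.no_grad():
        # Flow-based teacher stylizes the frame pair using optical flow,
        # producing temporally stable targets.
        t_flow_prev, t_flow_curr = flow_teacher(frame_prev, frame_curr)
        # Flow-free teacher stylizes the current frame independently.
        t_img_curr = image_teacher(frame_curr)
    # The student sees single frames only, so it needs no flow at inference.
    s_prev, s_curr = student(frame_prev), student(frame_curr)
    return (F.l1_loss(s_prev, t_flow_prev)
            + F.l1_loss(s_curr, t_flow_curr)
            + w_image * F.l1_loss(s_curr, t_img_curr))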


Cited by 49 publications (23 citation statements)
References 41 publications
“…Finally, experimental results show up to 111% and 98% relative improvements in Deception Rate and human preference, respectively. Future work can adopt our new loss in related problems, e.g., video [1,3,5,12,18,29,40,41] and photo [29,35,38,53] stylization, texture synthesis [10,11,26,51], etc. Future work can also derive tighter bounds for the style loss to improve style-aware normalization.…”
Section: Discussion (citation type: mentioning, confidence: 99%)
“…Video stylization aims to transfer the style of a reference image to a sequence of video frames. To address the temporal flickering issue produced by the image stylization approaches, numerous approaches [3,5,8,12,14] incorporate optical flow modules to train feed-forward networks for transferring a particular style to the videos. Several recent frameworks [7,9,52] enable the video style transfer to arbitrary styles.…”
Section: Related Work (citation type: mentioning, confidence: 99%)
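
The optical-flow modules mentioned above are typically used through a flow-warped temporal consistency loss. The following is a minimal sketch under common assumptions (the flow and occlusion mask are precomputed; the function and argument names are illustrative, not from any of the cited papers).

# Hypothetical sketch of the flow-based temporal consistency loss commonly
# used to train feed-forward video stylization networks.
import torch
import torch.nn.functional as F

def temporal_loss(stylized_prev, stylized_curr, flow, occlusion_mask):
    # flow: dense optical flow mapping frame t to frame t-1, (B, 2, H, W).
    # occlusion_mask: (B, 1, H, W), 1 where a pixel is visible in both frames.
    b, _, h, w = flow.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=flow.device),
                            torch.arange(w, device=flow.device), indexing="ij")
    # Displace the pixel grid by the flow and normalize to [-1, 1],
    # the coordinate convention grid_sample expects.
    x = 2.0 * (xs.float() + flow[:, 0]) / (w - 1) - 1.0
    y = 2.0 * (ys.float() + flow[:, 1]) / (h - 1) - 1.0
    sample_grid = torch.stack((x, y), dim=-1)  # (B, H, W, 2)
    warped_prev = F.grid_sample(stylized_prev, sample_grid, align_corners=True)
    # Stylized frames should agree in non-occluded regions after warping.
    return (occlusion_mask * (stylized_curr - warped_prev)).abs().mean()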
“…Following up on [12], we formulate the hypothesis on stable stylized/processed videos: non-occluded regions should be low-rank representations. Consider during training (i) a sequence of k consecutive frames, {I_t, …”
Section: Temporal Losses (citation type: mentioning, confidence: 99%)
“…Based on the formulated hypothesis, the rank of χ constructed using the raw input frames I_t (χ_I) and using the output frames of the model O_t (χ_O) should not be too different from each other. [12] propose the low-rank loss using the convex relaxation of the rank, defined as…”
Section: Temporal Losses (citation type: mentioning, confidence: 99%)
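
The quoted statement is cut off at the equation. The standard convex relaxation of matrix rank is the nuclear norm (the sum of singular values), so a plausible reading of the loss is a nuclear-norm penalty on χ_O. Here is a hedged sketch under that standard fact; the exact matrix construction and loss form used in [12] are assumptions.

# Hedged sketch of a low-rank temporal loss: stack the k stylized frames'
# non-occluded pixels as rows of a matrix chi_O and penalize its nuclear
# norm, the standard convex relaxation of matrix rank.
import torch

def low_rank_loss(output_frames, occlusion_mask):
    # output_frames: (k, C, H, W) consecutive stylized frames O_t.
    # occlusion_mask: (H, W), 1 where a pixel is visible in all k frames.
    k, c, h, w = output_frames.shape
    visible = occlusion_mask.bool().flatten().repeat(c)  # per-channel mask
    chi_o = output_frames.reshape(k, -1)[:, visible]     # rows = frames
    # Nuclear norm = sum of singular values (convex surrogate for rank).
    return torch.linalg.svdvals(chi_o).sum()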