2022
DOI: 10.1109/lsp.2022.3221350
Progressive Motion Context Refine Network for Efficient Video Frame Interpolation

Abstract: Recently, flow-based frame interpolation methods have achieved great success by first modeling optical flow between target and input frames, and then building a synthesis network for target frame generation. However, this cascaded architecture can lead to large model size and inference delay, hindering its use in mobile and real-time applications. To solve this problem, we propose a novel Progressive Motion Context Refine Network (PMCRNet) to predict motion fields and image context jointly for higher efficiency.…
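As background for the pipeline the abstract describes, the following is a minimal NumPy sketch of generic flow-based frame interpolation: warp both input frames toward the target time with predicted motion fields, then blend them with a per-pixel mask. This is an illustrative toy, not the authors' PMCRNet; the function names, the nearest-neighbor warping, and the mask-based blending are all simplifying assumptions.

```python
import numpy as np

def backward_warp(frame, flow):
    """Sample `frame` at positions displaced by `flow` (h, w, 2).

    Nearest-neighbor sampling with edge clamping, for brevity; real
    interpolation networks use bilinear sampling.
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

def interpolate_midframe(frame0, frame1, flow_t0, flow_t1, mask):
    """Synthesize the target frame from two warped inputs.

    `flow_t0` / `flow_t1` map target-frame pixels back into frame0 /
    frame1; `mask` is a per-pixel blend weight in [0, 1] (1 = trust
    frame0), standing in for the learned occlusion/fusion map.
    """
    warped0 = backward_warp(frame0, flow_t0)
    warped1 = backward_warp(frame1, flow_t1)
    return mask[..., None] * warped0 + (1.0 - mask[..., None]) * warped1
```

In a learned model, the flows and the mask come from a network; the paper's point is that producing them jointly, rather than with a separate flow stage followed by a heavy synthesis network, reduces model size and latency.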

Cited by 5 publications (1 citation statement)
References 36 publications (82 reference statements)
“…Lee et al [30] proposed a method called "adaptive collaboration of flows (AdaCoF)" that can generate a target pixel by referring to a variable number of pixels from any position. Kong et al [31] introduced a Progressive Motion Context Refine Network for efficient frame interpolation. The network predicts motion fields and image context jointly, simplifying the task by reusing existing textures from adjacent input frames.…”
Section: B. Video Temporal Super-resolution
confidence: 99%