2023
DOI: 10.1117/1.jei.32.5.053009
Optical flow-free generative and adversarial network: generative and adversarial network-based video super-resolution method by optical flow-free motion estimation and compensation

Cheng Fang,
Xueting Bian,
Ping Han
et al.

Abstract: Except for recovering image detail and texture information, the main difference between video super-resolution (VSR) and single-image super-resolution (SR) is that VSR focuses on alleviating the deficiency of temporal coherence between video frames. Motion estimation and motion compensation are the common techniques used to strengthen the temporal correlation between frames. Most motion estimation methods are based on optical flow. The optical flow method has three basic assumptions: the movement scale is small…
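As background to the motion estimation and compensation the abstract describes, a common way to strengthen temporal correlation is to backward-warp a neighboring frame toward the reference frame using an estimated flow field. The sketch below is a minimal NumPy illustration of that warping step, not the authors' implementation; the `warp` helper and the constant-flow test setup are hypothetical.

```python
import numpy as np

def warp(img, u, v):
    """Backward-warp `img` by flow (u, v) with bilinear interpolation:
    out[y, x] samples img at the sub-pixel position (y + v, x + u)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    x = np.clip(xs + u, 0, w - 1)
    y = np.clip(ys + v, 0, h - 1)
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * img[y0, x0] + wx * (1 - wy) * img[y0, x1] +
            (1 - wx) * wy * img[y1, x0] + wx * wy * img[y1, x1])

# Toy check: frame2 is frame1 shifted right by one pixel, so a constant
# flow of u = 1 warps frame2 back into alignment with frame1.
rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, 1, axis=1)   # frame2[y, x] = frame1[y, x - 1]
aligned = warp(frame2, u=1.0, v=0.0)
print(np.allclose(aligned[:, :-1], frame1[:, :-1]))  # True away from the border
```

In practice the flow field varies per pixel and is itself estimated, which is exactly where the assumptions discussed below come into play.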

Cited by 1 publication (2 citation statements)
References 21 publications
“…To validate the effectiveness of the proposed DCANet, we compare our method with several state-of-the-art (SOTA) real-time inference VSR models. These models include VESPCN [19], SOFVSR [13], TecoGAN [35], FRVSR [21], EGVSR [9], STDO [20], COFGAN [28], SWRN [10], RAI [13]. Comparison with SOTA lightweight and real-time inference VSR on benchmark dataset tested with 4x down-sampling operation at Gaussian degradation.…”
Section: Comparison with the State-of-the-Art Methods
Confidence: 99%
“…However, optical flow-based methods must be built on the assumptions of luminance consistency, small motion, and temporal coherence [5], so optical flow estimation is prone to errors in complex environments and large-motion video scenes. To solve this problem, [25,26] adopted deformable convolution to break through the network's limitation on geometric modeling transformations, Chan et al. [27] combined deformable convolution with geometric modeling transformations, and Fang et al. [28] used 3D-Unet to generate motion estimation. However, motion estimation without optical flow has limitations such as high computational complexity and difficulty in training convergence, which make it unsuitable for real-time inference VSR.…”
Section: Optical Flow-Based VSR
Confidence: 99%
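The three classical assumptions named in the statement above (luminance consistency, small motion, temporal coherence) linearize to the constraint Ix·u + Iy·v + It = 0, which flow-based estimators solve in least squares. The sketch below is a single-window Lucas-Kanade-style illustration on synthetic data, not any cited method; the Gaussian-blob setup and frame pair are assumptions for the demo.

```python
import numpy as np

def gaussian_blob(cx, cy, h=64, w=64, sigma=5.0):
    """Smooth test image: a Gaussian blob centered at (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

true_u, true_v = 0.4, -0.2          # sub-pixel motion: "small movement" holds
frame1 = gaussian_blob(32.0, 32.0)
frame2 = gaussian_blob(32.0 + true_u, 32.0 + true_v)  # same blob, shifted

Iy, Ix = np.gradient(frame1)        # spatial gradients (np.gradient is y-first)
It = frame2 - frame1                # temporal difference

# Least-squares solution of Ix*u + Iy*v + It = 0 over the whole frame:
A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
              [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
u_est, v_est = np.linalg.solve(A, b)
print(u_est, v_est)                 # close to (0.4, -0.2)
```

When the small-motion assumption is violated (a large shift relative to image structure), the linearization breaks down and the estimate degrades, which is the failure mode the citing authors describe and the reason flow-free motion estimation is attractive despite its higher computational cost.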