2021
DOI: 10.1016/j.cviu.2020.103097
FIFNET: A convolutional neural network for motion-based multiframe super-resolution using fusion of interpolated frames

Cited by 10 publications (14 citation statements: 0 supporting, 14 mentioning, 0 contrasting)
References 21 publications
“…The CNN architecture we use here is based on the FIFNet. 38 The FIFNet has a lightweight architecture with relatively few layers. This allows us to train the network from scratch, without the need for transfer learning, in ∼100 min.…”
Section: Discussion (mentioning)
confidence: 99%
“…The advantage of this is that it allows for a smaller network and avoids extrapolation errors on the patch borders. 38 CNN updates during training are done using stochastic gradient descent with momentum optimization. Sixty-four different random patches are extracted from each training image.…”
Section: Block Matching and Convolutional Neural Network (mentioning)
confidence: 99%
“…To enrich the RCAN input, we also supply subpixel interpolation distances for each pixel and feed into the RCAN architecture through a separate input head. This modification is inspired by the FIFNET architecture for multi-image SR. 33 Since paired hexagonally sampled LR images and rectangularly sampled HR images are not readily available, we develop an observation model-based approach for generating realistic, full frame RGB, synthetic data. Our observation model effectively constitutes a "forward model" for the inverse problem we wish to solve.…”
Section: Introduction (mentioning)
confidence: 99%
“…Our observation model effectively constitutes a "forward model" for the inverse problem we wish to solve. The observation model leverages a camera-specific optical transfer function (OTF) that models diffraction and detector integration (based on either hexagonal or rectangular detectors) for generating our data 33,34 as well as a resampling step. We apply the observation model to the DIV2K SISR dataset 35 to generate data to train our models so as to be ready to process real full frame RGB color camera outputs.…”
Section: Introduction (mentioning)
confidence: 99%