2022
DOI: 10.1007/s11760-022-02406-w
Particle filter based multi-frame image super resolution

Cited by 1 publication (2 citation statements)
References 26 publications
“…The network used three convolutional layers to represent different non-linear mapping relationships for reconstruction, and, although the network layers were shallow, they showed good performance at the time compared to traditional methods. Since then, a large number of deep learning-based methods have started to flood into super-resolution reconstruction techniques [16][17][18][19][20][21][22][23].…”
Section: Related Work (confidence: 99%)
“…How to make full use of the high-low resolution information between frames is the key to video super-resolution reconstruction. Most of the existing video super-resolution algorithms [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17] use the optical flow method for inter-frame alignment. These algorithms can handle the inter-frame information of the video but cannot make full use of the interframe information, especially in scenes with large inter-frame motion variations and distant targets, and there is still a need to continue to investigate how to perform better motion estimation and compensation.…”
Section: Introduction (confidence: 99%)
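The citation statement above notes that most video super-resolution pipelines align frames before fusing them, typically via dense optical flow, and that this alignment step is where large inter-frame motion causes trouble. As a minimal illustration of inter-frame registration (not the cited paper's method), the sketch below estimates a purely translational shift between two frames by phase correlation with NumPy FFTs; the function names and the integer-shift assumption are mine, and a real pipeline would use dense optical flow or sub-pixel registration instead.

```python
import numpy as np

def phase_correlate(ref, frame):
    """Estimate the integer (dy, dx) shift mapping `frame` onto `ref`
    by locating the peak of the normalized cross-power spectrum."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross /= np.abs(cross) + 1e-12        # keep phase, discard magnitude
    corr = np.real(np.fft.ifft2(cross))   # impulse at the relative shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks in the upper half of each axis correspond to negative shifts.
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def align(ref, frame):
    """Warp `frame` onto `ref` by the estimated shift (circular wrap)."""
    dy, dx = phase_correlate(ref, frame)
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))
```

For example, a frame that is a copy of the reference rolled by (3, -2) pixels is mapped back exactly, since circular shifts match phase correlation's periodic model; real video needs boundary handling and sub-pixel refinement on top of this.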