2012 · DOI: 10.1109/tcsvt.2012.2201669

Video Super-Resolution Using Codebooks Derived From Key-Frames

Cited by 38 publications (54 citation statements)
References 34 publications
“…We would, however, like to add to the framework a motion estimation step as in [86], in order to take advantage of the already reconstructed frames and enforce temporal consistency.…”
Section: Weight Computation Methods to Provide Temporal Consistency (mentioning)
Confidence: 99%
“…We then propose a new weight computation method for NE according to the principle described above, where the closest patches are found by motion estimation as in [86]. The motion estimation process is done for every overlapping patch: for a given LR input patch x_l^i in a non-key frame, we can then find the two LR patches in the two neighbor key frames, pointed to by the motion vectors …”
Section: Weight Computation Methods to Provide Temporal Consistency (mentioning)
Confidence: 99%
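The weight computation this excerpt describes lends itself to a short illustration. The Python/NumPy sketch below is a hypothetical reading, not the citing authors' code: it assumes the motion vectors toward the two neighboring key frames have already been estimated, gathers the two patches they point to, and computes LLE-style sum-to-one neighbor-embedding weights for them; the names ne_weights, key_frame_neighbors, reg, and size are all illustrative.

```python
import numpy as np

def ne_weights(query, neighbors, reg=1e-6):
    # LLE-style NE weights: minimize ||query - sum_j w_j * neighbors[j]||^2
    # subject to sum_j w_j = 1 (sketch; `reg` is an assumed ridge term).
    diffs = neighbors - query                        # (k, d) differences to the query
    G = diffs @ diffs.T                              # (k, k) local Gram matrix
    G += reg * (np.trace(G) + 1.0) * np.eye(len(G))  # regularize for stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                               # enforce the sum-to-one constraint

def key_frame_neighbors(key_prev, key_next, pos, mv_prev, mv_next, size=5):
    # Hypothetical helper: follow the two motion vectors from the patch at
    # `pos` in a non-key frame to the matching patches in the two neighboring
    # key frames, stacking them as the NE candidate set.
    y, x = pos
    def crop(frame, mv):
        dy, dx = mv
        return frame[y + dy:y + dy + size, x + dx:x + dx + size].ravel()
    return np.stack([crop(key_prev, mv_prev), crop(key_next, mv_next)])
```

Under these assumptions, the weights returned by ne_weights for the two motion-compensated key-frame patches would then be reused to combine the corresponding HR examples, which is the general NE principle the excerpt refers to.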
“…[1] proposes a down-sampling-based coding framework in which a super-resolution technique is employed to restore the down-sampled frames to their original resolutions. [2] extends a multiresolution approach to example-based super-resolution and discusses codebook construction for video sequences. In [3], the authors apply the example-based SR algorithm to restore the down-sampled frames.…”
Section: Introduction (mentioning)
Confidence: 99%
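The example-based restoration this excerpt summarizes can be sketched as a codebook lookup. The fragment below is a minimal, assumed illustration rather than any cited paper's actual method: it pairs LR/HR patches from a decoded key frame into a codebook and restores each LR patch of a down-sampled frame by nearest-neighbor search; build_codebook, restore_patch, lr_size, and scale are hypothetical names and parameters.

```python
import numpy as np

def build_codebook(key_hr, key_lr, lr_size=5, scale=2):
    # Pair every LR patch of a key frame with its co-located HR patch,
    # forming an example codebook (assumed sketch, not the papers' scheme).
    lr_list, hr_list = [], []
    H, W = key_lr.shape
    for y in range(H - lr_size + 1):
        for x in range(W - lr_size + 1):
            lr_list.append(key_lr[y:y + lr_size, x:x + lr_size].ravel())
            hr_list.append(key_hr[y * scale:(y + lr_size) * scale,
                                  x * scale:(x + lr_size) * scale].ravel())
    return np.stack(lr_list), np.stack(hr_list)

def restore_patch(lr_patch, lr_codes, hr_codes):
    # Replace an LR patch of a down-sampled frame with the HR example whose
    # LR counterpart is closest in Euclidean distance.
    idx = np.argmin(np.sum((lr_codes - lr_patch.ravel()) ** 2, axis=1))
    return hr_codes[idx]
```

In practice such schemes blend overlapping restored patches and may refresh the codebook at each key frame; this sketch shows only the core lookup.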