2022
DOI: 10.48550/arxiv.2208.04303
Preprint

Boosting neural video codecs by exploiting hierarchical redundancy

Abstract: In video compression, coding efficiency is improved by reusing pixels from previously decoded frames via motion and residual compensation. We define two levels of hierarchical redundancy in video frames: 1) first-order: redundancy in pixel space, i.e., similarities in pixel values across neighboring frames, which is effectively captured by motion and residual compensation; and 2) second-order: redundancy in the motion and residual maps themselves, which arises from smooth motion in natural videos. While most of the existing neural video …
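To make the two redundancy levels concrete, the following is a minimal NumPy sketch, not the paper's actual model: the paper uses learned networks, whereas the predictor below is a trivial hand-written one, and all names (warp, code_frame, and the flow/frame variables) are hypothetical.

import numpy as np

def warp(frame, flow):
    """First-order redundancy: reuse pixels from the previous decoded frame by
    backward-warping it with a dense motion field (nearest-neighbor sampling
    for simplicity)."""
    h, w = frame.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

def code_frame(cur_frame, prev_frame, cur_flow, prev_flow):
    """One coding step, illustrating both redundancy levels.

    First-order: the pixel residual left after motion compensation.
    Second-order: because motion in natural video is smooth over time, the
    current flow is close to the previous flow, so only a small flow
    difference needs to be coded instead of the full motion field.
    """
    # Second-order redundancy: predict the current motion from the previous
    # motion (a trivial "copy" predictor here; a neural codec would learn it).
    flow_prediction = prev_flow
    flow_residual = cur_flow - flow_prediction       # cheap to code if motion is smooth

    # First-order redundancy: motion-compensated prediction in pixel space.
    motion_compensated = warp(prev_frame, flow_prediction + flow_residual)
    pixel_residual = cur_frame - motion_compensated  # cheap to code if pixels are similar

    return flow_residual, pixel_residual

In a neural codec, the copy predictor and the warping operator would be replaced by learned modules, and the (small) flow and pixel residuals would be entropy-coded; the sketch only shows where the two kinds of redundancy sit in the pipeline.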

Cited by 1 publication (1 citation statement)
References 19 publications
“…Many subsequent works also adopt this residual coding-based framework and refine the modules therein. For example, [31, 43, 47] proposed motion prediction to further reduce redundancy. Optical flow estimation in scale-space [1] was designed to handle complex motion.…”
Section: Neural Video Compression
Mentioning confidence: 99%