Proceedings of the 30th ACM International Conference on Multimedia 2022
DOI: 10.1145/3503161.3547845
Hybrid Spatial-Temporal Entropy Modelling for Neural Video Compression

Cited by 41 publications (71 citation statements). References 28 publications.
“…Li et al. proposed learning feature-domain contexts as the condition. Its follow-up works [29, 50] adopt feature propagation to boost performance.…”
Section: Neural Video Compression (mentioning)
Confidence: 99%
“…At the same time, F t is also generated and propagated to the next frame. It is noted that our framework is based on [29]. Compared with [29], this paper redesigns the modules to exploit diverse contexts from both the temporal (Sections 3.2 and 3.3) and spatial (Section 3.4) dimensions.…”
Section: Overview (mentioning)
Confidence: 99%