2022
DOI: 10.1016/j.image.2022.116633

Learning to compress videos without computing motion

Cited by 6 publications (4 citation statements: 0 supporting, 4 mentioning, 0 contrasting)
References 34 publications
“…Figure 1 illustrates the overall architecture of our network, which extends our previous MOVI-Codec [12]. The compression network is comprised of four components: a Displacement Calculation Unit (DCU), a Displacement Compression Network (DCN), a Foveation Generator Unit (FGU), and a Frame Reconstruction Network (FRN).…”
Section: A. Framework (mentioning)
confidence: 99%
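
The four components named in this statement map onto a simple encode/decode pipeline. Below is a minimal PyTorch sketch of how they might be wired together; the class name MOVICodecExtended and all module internals are placeholders for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class MOVICodecExtended(nn.Module):
    # Sketch of the four-stage pipeline named in the statement above:
    # DCU -> DCN -> FGU -> FRN. Every module body here is a placeholder
    # convolution; the real networks are far deeper.
    def __init__(self, channels=64):
        super().__init__()
        self.dcu = nn.Conv2d(6, channels, 3, padding=1)         # Displacement Calculation Unit
        self.dcn = nn.Conv2d(channels, channels, 3, padding=1)  # Displacement Compression Network
        self.fgu = nn.Conv2d(channels, channels, 3, padding=1)  # Foveation Generator Unit
        self.frn = nn.Conv2d(channels, 3, 3, padding=1)         # Frame Reconstruction Network

    def forward(self, frame_t, frame_prev):
        x = torch.cat([frame_t, frame_prev], dim=1)  # stack frames along channels
        d = self.dcu(x)      # displacement features (no optical flow)
        y = self.dcn(d)      # compressed latent y_t
        c = self.fgu(y) * y  # foveated codes c_t = M(P) * y_t
        return self.frn(c)   # reconstructed frame

codec = MOVICodecExtended()
f_t, f_prev = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
recon = codec(f_t, f_prev)  # shape (1, 3, 64, 64)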
“…The quantized map along the x axis is shown in Figure 4. After a set of n masks M(P) is generated, we element-wise multiply M(P) and the encoder output y_t to obtain quantized, spatially variant (foveated) codes c_t, which are then subjected to entropy coding and bitrate estimation, using the same procedure as [12].…”
Section: Foveation Generator Unit (FGU) (mentioning)
confidence: 99%
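
The masking step described here reduces to an element-wise product between the generated masks M(P) and the encoder output y_t. Below is a minimal PyTorch sketch under assumed shapes; foveate_codes and the per-mask channel grouping are illustrative assumptions, not the paper's exact procedure.

import torch

def foveate_codes(y_t, masks):
    # Element-wise application of n spatially variant masks M(P) to the
    # encoder output y_t, as in the statement above. Here each mask gates
    # an equal-sized group of latent channels; this grouping is an
    # assumption for illustration, not the paper's exact layout.
    n = masks.shape[0]
    group = y_t.shape[0] // n  # channels gated per mask
    c_t = y_t.clone()
    for i in range(n):
        # zeroed latents are cheap to entropy-code downstream
        c_t[i * group:(i + 1) * group] *= masks[i]
    return c_t

y = torch.randn(8, 4, 4)                    # encoder output y_t, shape (C, H, W)
M = torch.randint(0, 2, (2, 4, 4)).float()  # two binary masks, shape (n, H, W)
c = foveate_codes(y, M)                     # foveated codes c_t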
“…Recent works have introduced more sophisticated components, e.g. Golinski et al. [15] […] feature space in NVC [20], and Chen et al. [9] replaced optical flow and warping by displaced frame differences. B-frame coding: Wu et al. [43] introduced one of the pioneering neural video codecs via frame interpolation that was facilitated by context information.…”
Section: Related Work (mentioning)
confidence: 99%
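
The displaced frame differences mentioned in this statement are the core of the cited codec's motion-free design: rather than estimating optical flow and warping, the network consumes differences against shifted copies of the reference frame. Below is a rough PyTorch sketch under that reading; displaced_frame_differences and the fixed integer search window are assumptions for illustration, not the authors' exact computation.

import torch

def displaced_frame_differences(cur, ref, max_disp=2):
    # Differences between the current frame and integer-shifted copies of
    # the reference frame over a (2*max_disp+1)^2 search window. No flow
    # is estimated or warped; the stack of differences itself carries the
    # motion information. Hypothetical helper for illustration only.
    diffs = []
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            shifted = torch.roll(ref, shifts=(dy, dx), dims=(1, 2))
            diffs.append(cur - shifted)
    return torch.stack(diffs)  # ((2*max_disp+1)^2, C, H, W)

f_t, f_prev = torch.randn(3, 8, 8), torch.randn(3, 8, 8)
d = displaced_frame_differences(f_t, f_prev)  # shape (25, 3, 8, 8)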