2022
DOI: 10.1007/978-3-031-19809-0_32
Neural Video Compression Using GANs for Detail Synthesis and Propagation

Cited by 28 publications
(33 citation statements)
References 23 publications
“…These latter metrics express distance in terms of features output by an Inception network. They have recently grown in importance for the perceptual evaluation of picture quality [24]. To select the optimal value of Λ, the table reports performance for C_opt while varying Λ on a log scale.…”
Section: Results
confidence: 99%
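The Inception-feature distances mentioned in the excerpt above (FID-style metrics) compare the statistics of deep features extracted from two image sets. A minimal sketch under a diagonal-covariance simplification of the Fréchet distance, with random arrays standing in for real Inception activations (the feature shapes and the helper name `fid_diag` are illustrative assumptions, not from the paper):

```python
import numpy as np

def fid_diag(feats_a, feats_b):
    # Fréchet distance between two feature sets, assuming diagonal
    # covariances: ||mu_a - mu_b||^2 + sum((sigma_a - sigma_b)^2).
    # The full metric uses a matrix square root of the covariance product.
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    sd_a, sd_b = feats_a.std(axis=0), feats_b.std(axis=0)
    return float(np.sum((mu_a - mu_b) ** 2) + np.sum((sd_a - sd_b) ** 2))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(2000, 16))  # stand-in "real" features
b = rng.normal(0.5, 1.0, size=(2000, 16))  # stand-in "generated" features

print(fid_diag(a, a))  # identical sets score 0
print(fid_diag(a, b))  # distance grows as feature statistics diverge
```

Identical feature sets score exactly zero, and the distance grows as the two feature distributions drift apart, which is why such metrics track perceptual quality better than per-pixel errors.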
“…Hence, recent deep-framework methods [? ], [6]–[8], [37]–[51] propose to construct end-to-end DLVC frameworks. Although DLVC methods can leverage the benefits of an end-to-end learning strategy, they still remove temporal redundancy by referring only to one or a few neighboring frames, which limits their performance improvement.…”
Section: Related Work
confidence: 99%
“…Besides, Lu et al further presented an online encoder updating scheme [40] from the perspective of content adaptation and error propagation. Afterwards, a series of end-to-end video compression algorithms [41], [42], [43], [44], [45], [46], [47], [48] were put forward to improve the coding performance. More specifically, Hu et al [41] proposed an end-to-end video compression framework by converting the input video to the latent code representation.…”
Section: Learning-based Video Coding
confidence: 99%
“…More specifically, Hu et al [41] proposed an end-to-end video compression framework that converts the input video into a latent code representation. Related techniques of recurrent learning [42] and adversarial learning [45], [47] were also introduced into end-to-end compression frameworks. These learning-based compression algorithms achieve promising coding efficiency in general scenes, though there is still room for improvement in specific application scenarios such as talking-face videos.…”
Section: Learning-based Video Coding
confidence: 99%