2023
DOI: 10.1109/access.2023.3329717

Speed Up VVC Intra-Coding by Learned Models and Feature Statistics

Jiann-Jone Chen,
Yeh-Guan Chou,
Chi-Shiun Jiang

Abstract: The newest video coding standard, Versatile Video Coding (VVC), adopts a quadtree plus multi-type tree (QTMT) block partition structure and improves compression performance by about 30%∼50% over the previous High Efficiency Video Coding (HEVC) standard, at the cost of higher time complexity. To make practical video communication applications feasible, the high time complexity caused by the exhaustive rate-distortion optimization (RDO) search procedure has to be reduced. We proposed to predic…
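The general idea behind this class of fast intra-coding methods is to let a learned predictor prune the QTMT split candidates before the encoder runs the full RDO check, so that only the likely partition modes pay the full rate-distortion cost. The sketch below is a minimal illustration of that idea, not the paper's actual implementation; `predictor`, `rd_cost`, and `threshold` are hypothetical placeholders.

```python
# Hypothetical sketch: prune QTMT split modes with a learned predictor,
# then run full RDO only on the surviving candidates.

QTMT_SPLITS = ["NO_SPLIT", "QT", "BT_H", "BT_V", "TT_H", "TT_V"]

def fast_intra_partition(cu, predictor, rd_cost, threshold=0.1):
    """Return the best split mode for a CU, skipping unlikely ones.

    predictor(cu) -> dict mapping split mode to predicted probability.
    rd_cost(cu, mode) -> full RDO cost of encoding `cu` with `mode`.
    """
    probs = predictor(cu)                      # learned model output
    # Keep only the split modes the model considers plausible.
    candidates = [m for m in QTMT_SPLITS if probs.get(m, 0.0) >= threshold]
    if not candidates:                         # fall back to exhaustive search
        candidates = QTMT_SPLITS
    # Full RDO is evaluated only on the surviving candidates.
    return min(candidates, key=lambda m: rd_cost(cu, m))
```

Raising `threshold` trades a larger encoding-time saving against a potentially larger BDBR loss, which is the trade-off the comparisons below are measuring.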

Cited by 2 publications (2 citation statements)
References 21 publications
“…Wu et al. proposed a hierarchical grid fully convolutional network to predict the QTMT partition structure for fast VVC intra coding [27]. Chen et al. proposed to predict CU partition modes for VVC intra coding by a CNN model, which is trained with the neighboring line pixels and quantization parameters [28]. In [29], heuristic and deep learning methods were combined.…”
Section: Deep Learning Methods (mentioning)
confidence: 99%
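As a hedged illustration of the CNN-based approach described in the citation above for [28], the sketch below shows one plausible way a small network could combine a CU's luma samples, its neighboring line pixels, and the quantization parameter (QP) to classify partition modes. The architecture, layer sizes, and names are illustrative assumptions, not the paper's reported model.

```python
# Illustrative (not the paper's) CNN that maps CU luma samples plus
# neighboring-line pixels and QP to a distribution over partition modes.
import torch
import torch.nn as nn

class PartitionModeCNN(nn.Module):
    def __init__(self, num_modes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # Concatenate pooled CU features with neighboring-line pixels and QP.
        self.classifier = nn.Sequential(
            nn.Linear(32 * 4 * 4 + 2 * 32 + 1, 128), nn.ReLU(),
            nn.Linear(128, num_modes),
        )

    def forward(self, cu_luma, neighbor_lines, qp):
        # cu_luma: (N, 1, H, W) luma block; neighbor_lines: (N, 64) flattened
        # top/left reference pixels; qp: (N, 1) normalized QP value.
        x = self.features(cu_luma).flatten(1)
        x = torch.cat([x, neighbor_lines, qp], dim=1)
        return self.classifier(x)              # logits over partition modes
```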
“…A quantitative analysis comparing our model with deep learning-based methods is presented in Table 9. This includes methods from Zan 2023 [32], Wu 2022 [27], and Chen 2023 [28]. It can be seen that our algorithm on top of VTM 23.1 achieves the maximum time saving of 69.85% with the minimum BDBR loss of 1.65% (the time saving of the proposed method on top of VTM 7.0 is slightly reduced to 66.69%).…”
Section: Comparison With Others (mentioning)
confidence: 95%
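For readers unfamiliar with the metrics quoted above, the following is a minimal sketch of how BDBR (Bjøntegaard Delta Bit Rate) and encoding time saving are conventionally computed from rate-PSNR points and encoding times; it is a generic implementation of the standard Bjøntegaard metric, not the authors' evaluation script.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average bitrate change (%) of the test codec vs. the anchor (BDBR)."""
    lr_a = np.log(np.asarray(rate_anchor, dtype=float))
    lr_t = np.log(np.asarray(rate_test, dtype=float))
    # Fit cubic polynomials: log-rate as a function of PSNR.
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    # Integrate over the overlapping PSNR range.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0

def time_saving(t_anchor, t_test):
    """Encoding time saving (%) of the fast method relative to the anchor."""
    return (t_anchor - t_test) / t_anchor * 100.0
```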