2022 · Preprint
DOI: 10.48550/arXiv.2202.07513
Post-Training Quantization for Cross-Platform Learned Image Compression

Abstract: Learned image compression has been shown to outperform conventional image coding techniques and is approaching practicality in industrial applications. One of the most critical issues to consider is non-deterministic calculation, which makes the probability prediction inconsistent across platforms and prevents successful decoding. We propose to solve this problem by introducing well-developed post-training quantization and making the model inference integer-arithmetic-only, which is…
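The abstract's core idea, post-training quantization that makes inference integer-arithmetic-only, can be illustrated with a minimal NumPy sketch. This is a generic symmetric int8 scheme for a single linear layer, not the paper's exact method; the function names, per-tensor scaling, and calibration procedure are all illustrative assumptions.

```python
import numpy as np

QMAX = 127  # signed int8 range used throughout this sketch

def calibrate_scale(samples: np.ndarray) -> float:
    """Pick a symmetric per-tensor scale so observed values map into [-127, 127]."""
    return float(np.abs(samples).max()) / QMAX

def quantize(x: np.ndarray, scale: float) -> np.ndarray:
    """Round floats onto the uniform int8 grid defined by `scale`."""
    return np.clip(np.round(x / scale), -QMAX, QMAX).astype(np.int8)

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32)).astype(np.float32)       # pretrained float weights
calib = rng.standard_normal((64, 32)).astype(np.float32)   # calibration activations

s_w = calibrate_scale(W)      # post-training: scales come from data, no retraining
s_x = calibrate_scale(calib)
W_q = quantize(W, s_w)

def int_linear(x: np.ndarray) -> np.ndarray:
    """Forward pass with int8 operands and int32 accumulation.
    The integer matmul is bit-exact on any platform; real integer-only
    pipelines would also fold the final rescale into a fixed-point multiplier."""
    x_q = quantize(x, s_x)
    acc = x_q.astype(np.int32) @ W_q.astype(np.int32).T
    return acc * (s_x * s_w)  # dequantize only to compare against the float layer

x = rng.standard_normal((4, 32)).astype(np.float32)
print(np.abs(int_linear(x) - x @ W.T).max())  # small quantization error
```

The key point is that the matrix multiply itself runs entirely in integers, so every platform computes a bit-identical accumulator, which is what makes the entropy model's probability prediction reproducible at the decoder.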

Cited by 2 publications (2 citation statements) · References 12 publications

Citation statements (ordered by relevance):
“…As initially defined by Ballé et al. [11], the non-determinism problem in cross-platform scenarios cannot be avoided when arithmetic coding is used for data compression [14,15,4,2,3]. Existing methods mainly solve the non-determinism problem with quantization techniques, which replace uncertain float calculations with deterministic integer calculations [11,16,17,18]. Nevertheless, all these methods require at least some training of the model on calibration data, which complicates implementation.…”
Section: Introduction (mentioning)
confidence: 99%
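The replacement of "uncertain float calculations with deterministic integer calculations" that this statement describes can be made concrete with a toy demonstration (not taken from the cited works): float addition is not associative, so a different summation order, as produced by different BLAS or GPU kernels, can round differently, whereas integer sums are exact and order-independent.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000).astype(np.float32)

def seq_sum(v):
    """Sequential float32 accumulation; the result depends on element order."""
    acc = np.float32(0.0)
    for t in v:
        acc += t
    return acc

print(seq_sum(x) == seq_sum(x[::-1]))   # typically False: order changes rounding

xi = np.round(x * 256).astype(np.int64)  # fixed-point version of the same data
print(xi.sum() == xi[::-1].sum())        # True: integer addition is exact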
“…Nevertheless, existing learned image compression (LIC) approaches typically adopt the floating-point format for data representation (e.g., weights, biases, activations), which not only incurs excessive space-time complexity but also causes cross-platform inconsistency and decoding failures (He et al., 2022). To tackle these issues for practical application, model quantization is usually applied to generate fixed-point (or integer) LICs (Hong et al., 2020).…”
Section: Introduction (mentioning)
confidence: 99%
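As a loose illustration of the fixed-point data representation this statement contrasts with float32, here is a hypothetical helper (all names assumed, not from the cited works) that stores a weight tensor as int8 values plus a single scale:

```python
import numpy as np

def to_fixed_point(w: np.ndarray, n_bits: int = 8):
    """Store a float tensor as integers plus one scale (symmetric quantization)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = float(np.abs(w).max()) / qmax
    w_q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return w_q, scale  # the int8 payload is 4x smaller than float32

w = np.random.default_rng(1).standard_normal((8, 8)).astype(np.float32)
w_q, scale = to_fixed_point(w)
print(np.abs(w_q.astype(np.float32) * scale - w).max())  # error <= scale / 2
```

Besides determinism, the representation itself is cheaper: the integers carry the data and a single float scale per tensor suffices to recover approximate values.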