Abstract: Learned image compression has been shown to outperform conventional image coding techniques and is approaching practical use in industrial applications. One of the most critical remaining issues is non-deterministic computation, which makes probability prediction inconsistent across platforms and causes decoding failures. We propose to solve this problem by introducing well-developed post-training quantization and making model inference integer-arithmetic-only, which is…
“…As initially defined by Ballé et al. [11], the non-determinism problem in cross-platform scenarios cannot be avoided when arithmetic coding is used for data compression [14,15,4,2,3]. Existing methods mainly solve the non-determinism problem with quantization techniques, which replace uncertain float calculations with deterministic integer calculations [11,16,17,18]. Nevertheless, all these methods require at least some training steps for the model on calibration data, which makes them complicated to implement.…”
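To make the quoted idea concrete, the following Python sketch illustrates symmetric post-training quantization with integer-only accumulation; the function names and bit widths are illustrative assumptions, not taken from the cited papers. Because every platform runs the same int32 arithmetic, any entropy parameters derived from the accumulation are bit-exact across platforms.

import numpy as np

def quantize_tensor(x, num_bits=8):
    # Symmetric per-tensor PTQ: map floats to signed integers plus one scale.
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q.astype(np.int32), scale

def int_linear(q_x, s_x, q_w, s_w):
    # Integer-only matmul: accumulate in int32; the float rescale is a single
    # deferred multiply, so platform-dependent float kernels never touch the
    # accumulation that probability prediction depends on.
    acc = q_x @ q_w.T
    return acc, s_x * s_w

# Toy usage: two "platforms" running this pipeline agree exactly.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))
w = rng.normal(size=(8, 16))
q_x, s_x = quantize_tensor(x)
q_w, s_w = quantize_tensor(w)
acc, scale = int_linear(q_x, s_x, q_w, s_w)
assert np.array_equal(acc, q_x @ q_w.T)  # deterministic on any platform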
State-of-the-art neural video codecs have outperformed the most sophisticated traditional codecs in terms of rate-distortion (RD) performance in certain cases. However, using them in practical applications remains challenging for two major reasons. 1) Cross-platform computational errors resulting from floating-point operations can lead to inaccurate decoding of the bitstream. 2) The high computational complexity of encoding and decoding makes real-time performance difficult to achieve. In this paper, we propose a real-time cross-platform neural video codec that can efficiently decode (≈25 FPS) a 720P video bitstream produced on another encoding platform using a consumer-grade GPU (e.g., an NVIDIA RTX 2080). First, to resolve the codec inconsistency caused by the uncertainty of floating-point calculations across platforms, we design a calibration transmitting system that guarantees consistent quantization of entropy parameters between the encoding and decoding stages. Parameters whose quantization may cross a bin boundary between encoding and decoding are identified at the encoding stage, and their coordinates are delivered in an auxiliary transmitted bitstream. These inconsistent parameters can then be handled properly at the decoding stage. Furthermore, to reduce the bitrate of the auxiliary bitstream, we rectify the distribution of entropy parameters using a piecewise Gaussian constraint. Second, to match the computational limitations on the decoding side for a real-time video codec, we design a lightweight model. A series of efficiency techniques, such as model pruning, motion downsampling, and arithmetic coding skipping, enables our model to achieve a 25 FPS decoding speed on an NVIDIA RTX 2080 GPU. Experimental results demonstrate that our model achieves real-time decoding of 720P videos encoded on another platform. Moreover, the real-time model brings up to 24.2% BD-rate improvement in terms of PSNR over the H.265 (medium) anchor.
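As a rough illustration of the calibration transmitting idea described above, the sketch below flags entropy parameters whose values fall near a quantization bin edge, so that their coordinates can be signalled in an auxiliary bitstream and snapped to the encoder's bins at the decoder. The step size, margin, and helper names are hypothetical stand-ins, not the authors' actual implementation.

import numpy as np

STEP = 1.0 / 64  # hypothetical quantization step for entropy parameters

def flag_transboundary(params, step=STEP, margin=1e-3):
    # Return flat coordinates of parameters lying within `margin` of a
    # quantization bin edge, where cross-platform float noise could push
    # the encoder and decoder into different bins.
    frac = np.abs(params / step - np.round(params / step))  # distance to bin centre, in bins
    return np.flatnonzero(frac > 0.5 - margin / step)

# Encoder side: quantize and record the risky coordinates.
rng = np.random.default_rng(1)
sigma = rng.uniform(0.01, 2.0, size=1024)   # e.g. predicted Gaussian scales
coords = flag_transboundary(sigma)          # sent in the auxiliary bitstream
q_enc = np.round(sigma / STEP).astype(np.int32)

# Decoder side: its recomputed parameters differ by tiny float noise, so it
# re-quantizes locally and overwrites only the flagged entries.
sigma_dec = sigma + rng.normal(scale=1e-5, size=sigma.shape)
q_dec = np.round(sigma_dec / STEP).astype(np.int32)
q_dec[coords] = q_enc[coords]
# Consistent as long as the platform noise stays below the chosen margin.
assert np.array_equal(q_dec, q_enc)

The piecewise Gaussian constraint mentioned in the abstract would then shape the parameter distribution away from bin edges, shrinking the number of flagged coordinates and hence the auxiliary bitrate.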
“…Nevertheless, existing learned image compression (LIC) approaches typically adopt the floating-point format for data representation (e.g., weight, bias, activation), which not only consumes an excessive amount of space-time complexity but also brings about platform inconsistency and decoding failures (He et al., 2022). To tackle these issues for practical application, model quantization is usually applied to generate fixed-point (or integer) LICs (Hong et al., 2020).…”
Quantizing a floating-point neural network to its fixed-point representation is crucial for Learned Image Compression (LIC) because it ensures decoding consistency for interoperability and reduces space-time complexity for implementation. Existing solutions often have to retrain the network for model quantization, which is time-consuming and impractical. This work suggests the use of Post-Training Quantization (PTQ) to directly process pretrained, off-the-shelf LIC models. We theoretically prove that minimizing the mean squared error (MSE) in PTQ is suboptimal for the compression task and thus develop a novel Rate-Distortion (R-D) Optimized PTQ (RDO-PTQ) to best retain compression performance. RDO-PTQ needs to compress only a few images (e.g., 10) to optimize the transformation of the weights, biases, and activations of the underlying LIC model from its native 32-bit floating-point (FP32) format to 8-bit fixed-point (INT8) precision for subsequent fixed-point inference. Experiments reveal the outstanding efficiency of the proposed method on different LICs, showing the closest coding performance to their floating-point counterparts. Moreover, our method is a lightweight, plug-and-play approach that requires no model retraining, which is attractive to practitioners.
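The following sketch outlines the R-D-driven calibration the abstract describes: per-tensor scales are grid-searched against a rate-distortion cost measured on a few calibration images, instead of the tensor-level MSE used by generic PTQ. The cost function here is a toy proxy with assumed names; a faithful run would encode roughly ten images through the LIC model and measure actual bits and distortion.

import numpy as np

def quantize(w, scale, num_bits=8):
    # Quantize-dequantize: round to the integer grid, then map back to
    # floats so the downstream cost can be evaluated directly.
    lo, hi = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    return np.clip(np.round(w / scale), lo, hi) * scale

def calibrate_scale(w, rd_cost, candidates=None):
    # Grid-search a per-tensor scale. `rd_cost(w_q)` should evaluate
    # rate + lambda * distortion of the codec with this tensor replaced by
    # its quantized version on a handful of calibration images -- the R-D
    # objective the paper argues for, rather than tensor-level MSE.
    if candidates is None:
        base = np.abs(w).max() / 127.0
        candidates = base * np.linspace(0.8, 1.2, 21)  # search around max-abs scale
    costs = [rd_cost(quantize(w, s)) for s in candidates]
    return candidates[int(np.argmin(costs))]

# Toy usage with a stand-in R-D cost: MSE against the float tensor plus a
# crude "rate" term. A real run would replace this proxy with measured
# bits and PSNR from encoding ~10 calibration images.
rng = np.random.default_rng(2)
w = rng.normal(size=(256,))
proxy = lambda w_q: np.mean((w - w_q) ** 2) + 0.01 * np.log2(1 + np.abs(w_q).sum())
best = calibrate_scale(w, proxy)
print(f"selected scale: {best:.5f}")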