2020 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip40778.2020.9190805
End-To-End Learned Image Compression With Fixed Point Weight Quantization

Abstract: End-to-end learned image compression (LIC) has reached the coding gain of traditional hand-crafted methods such as BPG (HEVC intra). However, the large network size prohibits the use of LIC on resource-limited embedded systems. This paper reduces the network complexity by quantizing both weights and activations. 1) For the weight quantization, we first study different kinds of grouping and quantization schemes. A channel-wise non-linear quantization scheme is determined based on the coding gai…
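The abstract mentions channel-wise weight quantization, i.e. assigning each output channel of a convolution its own fixed-point scale. The sketch below illustrates the general idea with a generic uniform (linear) per-channel scheme; the paper itself uses a non-linear scheme, and the function names here are hypothetical, not from the paper.

```python
import numpy as np

def quantize_weights_channelwise(w, n_bits=8):
    """Per-channel uniform fixed-point weight quantization (illustrative sketch).

    For a conv weight tensor of shape (out_ch, in_ch, kH, kW), each output
    channel gets its own scale so that its max magnitude maps onto the
    signed n-bit integer range. This is a generic linear scheme, not the
    paper's channel-wise non-linear one.
    """
    qmax = 2 ** (n_bits - 1) - 1                      # e.g. 127 for 8 bits
    scales = np.abs(w).reshape(w.shape[0], -1).max(axis=1) / qmax
    scales = np.where(scales == 0, 1.0, scales)       # guard all-zero channels
    bshape = (-1,) + (1,) * (w.ndim - 1)              # broadcast per channel
    q = np.round(w / scales.reshape(bshape)).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    """Map integer weights back to floats using the per-channel scales."""
    bshape = (-1,) + (1,) * (q.ndim - 1)
    return q.astype(np.float32) * scales.reshape(bshape)
```

Per-channel scales matter because weight magnitudes can differ widely between channels; a single global scale would waste integer levels on small-magnitude channels.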

Cited by 11 publications (3 citation statements) | References 43 publications
“…The above architecture [4] has then been considered as a reference model in many other deep-learning-based image compression algorithms [5,8,25,47]. Among them, we retain here the method proposed in [5].…”
Section: End-to-End Learned Image Compression Models
confidence: 99%
“…[37] developed a heuristic method to train an integer LIC from scratch. [38] proposed a weight-clipping method to reduce the weight quantization error, and an advanced version with layer-by-layer weight fine-tuning was presented in [39]. [40] proposed a range preprocessing step to bound the dynamic range, and then performed a range-adaptive quantization.…”
Section: Introduction
confidence: 99%
“…[40] proposed a range preprocessing step to bound the dynamic range, and then performed a range-adaptive quantization. In [38], [39], though the weights can be quantized with small coding loss, the activations in the main path are not considered for quantization. In [37], [40], both weights and activations in the main and hyper paths are quantized.…”
Section: Introduction
confidence: 99%
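The citation statement above contrasts weight-only schemes with methods that also quantize activations after bounding their dynamic range. The sketch below shows one generic way such range preprocessing plus uniform activation quantization could look; it is an assumption-laden illustration, not the exact scheme of any cited paper, and `quantize_activation` and `clip_max` are hypothetical names.

```python
import numpy as np

def quantize_activation(x, clip_max, n_bits=8):
    """Unsigned uniform quantization of a non-negative (e.g. post-ReLU) activation.

    The dynamic range is first bounded to [0, clip_max] (range
    preprocessing), then mapped onto 2**n_bits - 1 integer levels.
    Illustrative only; the cited papers' exact schemes differ.
    """
    levels = 2 ** n_bits - 1                  # e.g. 255 for 8 bits
    x_clipped = np.clip(x, 0.0, clip_max)     # bound the dynamic range
    scale = clip_max / levels                 # one step of the integer grid
    q = np.round(x_clipped / scale).astype(np.uint8)
    return q, scale
```

Bounding the range before quantization trades a small clipping error on rare large activations for finer resolution on the common small ones, which is why range preprocessing helps at low bit widths.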