2021 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas51556.2021.9401133
Transform-Based Feature Map Compression for CNN Inference

Abstract: To achieve higher accuracy in machine learning tasks, very deep convolutional neural networks (CNNs) have recently been designed. However, the large memory access of deep CNNs leads to high power consumption. A variety of hardware-friendly compression methods have been proposed to reduce the data-transfer bandwidth by exploiting the sparsity of feature maps. Most of them focus on designing a specialized encoding format to increase the compression ratio. In contrast, we observe and exploit the sparsity distinctio…
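The idea the abstract describes, transforming feature-map blocks and exploiting the resulting sparsity, can be illustrated with a minimal sketch. This is not the paper's implementation: the 8x8 block size, the orthonormal DCT-II, and the simple magnitude threshold are all illustrative assumptions, and the function names are hypothetical.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (n x n); rows are frequencies.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def compress_block(block, thresh):
    # Forward 2-D DCT, then zero small coefficients to create sparsity
    # that a downstream encoder (e.g. run-length/zero-skipping) can exploit.
    D = dct_matrix(block.shape[0])
    coef = D @ block @ D.T
    coef[np.abs(coef) < thresh] = 0.0
    return coef

def decompress_block(coef):
    # Inverse 2-D DCT (D is orthonormal, so the inverse is D.T).
    D = dct_matrix(coef.shape[0])
    return D.T @ coef @ D

# ReLU-like feature-map block: non-negative with many exact zeros.
rng = np.random.default_rng(0)
fmap = np.maximum(rng.normal(size=(8, 8)), 0.0)
coef = compress_block(fmap, thresh=0.5)
recon = decompress_block(coef)
```

With `thresh=0.0` the round trip is lossless; raising the threshold trades reconstruction error for a sparser coefficient block.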

Cited by 12 publications (3 citation statements).
References 35 publications (53 reference statements).
“…In these methods, conversion, quantization, and encryption are manually designed and not optimally designed, which poses a challenge to the compression of images. Several techniques such as predictive coding [19], transform-based coding [20], and VQ have been proposed for image compression. VQ-based image compression techniques are used as popular techniques for high-level, low-distortion compression compared to other methods.…”
Section: Related Work, mentioning
confidence: 99%
“…FCL maintains the same number of channels while fusing global context information features [36]. Applying the GC layer in neural networks enhances the model's comprehension of the overall image and utilization of context information [37]. The principle of the GC layer is represented as follows:…”
Section: BFA, mentioning
confidence: 99%
“…[8] developed a feature channel arrangement method for the image/video-codec based feature compression framework. [9] explored transform (i.e., DCT) for more efficient intermediate deep feature compression.…”
Section: Introduction, Compression For Intermediate Deep Learning Feat..., mentioning
confidence: 99%