2022 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip46576.2022.9897325
Channel-Wise Bit Allocation for Deep Visual Feature Quantization

Abstract: Intermediate deep visual feature compression and transmission is an emerging research topic, which enables a good balance among computing load, bandwidth usage and generalization ability for AI-based visual analysis in edge-cloud collaboration. Quantization and the corresponding rate-distortion optimization are the key techniques in deep feature compression. In this paper, by exploring the feature statistics and a greedy iterative algorithm, we propose a channel-wise bit allocation method for deep feature quantization…
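The abstract only sketches the method, so the following is a rough, hypothetical illustration of what a greedy channel-wise bit-allocation loop could look like, not the authors' code: the function names `quantize` and `greedy_bit_allocation` are made up, and per-channel MSE stands in for the network-output error that the paper actually minimizes.

```python
# Hypothetical sketch of greedy channel-wise bit allocation:
# repeatedly spend one extra bit on the channel whose quantization
# currently causes the largest error.
import numpy as np

def quantize(channel, bits):
    """Uniform quantization of one feature channel to 2**bits levels."""
    lo, hi = channel.min(), channel.max()
    if hi == lo:
        return channel
    levels = 2 ** bits - 1
    q = np.round((channel - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo

def greedy_bit_allocation(feature, total_bits, min_bits=1):
    """feature: (C, H, W) array; returns per-channel bit depths that sum to total_bits."""
    C = feature.shape[0]
    bits = np.full(C, min_bits)
    budget = total_bits - min_bits * C
    for _ in range(budget):
        # Per-channel distortion at the current bit depth (MSE here is a
        # stand-in for the network-output error used in the paper).
        err = [np.mean((feature[c] - quantize(feature[c], bits[c])) ** 2)
               for c in range(C)]
        bits[np.argmax(err)] += 1  # give the next bit where it helps most
    return bits
```

A marginal-gain variant (allocating the bit that reduces the error most, rather than to the currently worst channel) is another common way to run such a greedy loop; the paper's exact criterion is not reproduced here.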

Cited by 1 publication (1 citation statement)
References 22 publications
“…In comparison, the channel with lower sensitivity would be allocated fewer bits to guarantee that the overall bits do not exceed the given bit cost. In addition, Wang et al. [22] investigated the quantization of deep features. Unlike [2,5,9,23,24], which use uniform or logarithmic quantizers to reduce the data volume, their method assigns different quantization intervals to each feature channel to achieve minimal network output errors.…”
Section: A. Deep Feature Compression With Codec (mentioning)
confidence: 99%
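The contrast drawn in the quoted statement, a single uniform or logarithmic quantizer for the whole tensor versus channel-specific quantization intervals, can be illustrated with a minimal sketch. This is an assumption-laden example, not the code of [22]: the helper `quantize_per_channel` is hypothetical, and the step sizes `steps` are assumed inputs (e.g. chosen elsewhere to minimize network output error).

```python
# Sketch of per-channel quantization intervals (hypothetical helper,
# not the method of [22]): each channel gets its own step size instead
# of one global quantizer for the whole feature tensor.
import numpy as np

def quantize_per_channel(feature, steps):
    """feature: (C, H, W) array; steps: length-C array of per-channel step sizes."""
    out = np.empty_like(feature)
    for c, step in enumerate(steps):
        out[c] = np.round(feature[c] / step) * step  # channel-specific interval
    return out

# Usage: feat = np.random.randn(64, 28, 28)
#        deq = quantize_per_channel(feat, np.full(64, 0.1))
```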