2020
DOI: 10.1109/jstsp.2020.2969554
DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks

Abstract: The field of video compression has developed some of the most sophisticated and efficient compression algorithms known in the literature, enabling very high compressibility for little loss of information. Whilst some of these techniques are domain specific, many of their underlying principles are universal in that they can be adapted and applied for compressing different types of data. In this work we present DeepCABAC, a compression algorithm for deep neural networks that is based on one of the state-of-the-a…


Cited by 86 publications (69 citation statements)
References 48 publications
“…Obtaining well-trained DL models that can still be employed productively demands intensive memory and computation, owing to their huge complexity and large numbers of parameters [193, 194]. One field characterized as data-intensive is healthcare and environmental science.…”
Section: Challenges (Limitations) of Deep Learning and Alternate Solutions
confidence: 99%
“…Figure 3g shows that the error, as expected, decreases with increasing N. However, for similar N, the error due to uniform quantization is significantly higher for normally distributed weights than for uniformly distributed weights. Since weight distributions in practical scenarios [55]-[58] are more likely to follow a normal distribution, uniform quantization can lead to a significant loss of inference accuracy. To mitigate the challenges associated with uniform quantization, we propose k-means clustering based quantization.…”
Section: G(s)
confidence: 99%
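To make the contrast in that excerpt concrete, here is a minimal sketch (not code from the cited paper) that quantizes synthetic weights with evenly spaced levels versus a 1-D k-means codebook (Lloyd's algorithm). The function names, the 16-level setting, and the distribution parameters are all illustrative assumptions:

```python
# Minimal sketch: uniform vs. k-means quantization of 1-D weight samples.
import numpy as np

def uniform_quantize(w, n_levels):
    """Snap w to n_levels evenly spaced levels over [w.min(), w.max()]."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (n_levels - 1)
    return lo + np.round((w - lo) / step) * step

def kmeans_quantize(w, n_levels, n_iter=50):
    """1-D Lloyd's algorithm: codebook levels adapt to the weight density."""
    centers = np.quantile(w, np.linspace(0, 1, n_levels))  # density-aware init
    for _ in range(n_iter):
        idx = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
        for k in range(n_levels):
            if np.any(idx == k):
                centers[k] = w[idx == k].mean()
    return centers[idx]

rng = np.random.default_rng(0)
for name, w in [("uniform", rng.uniform(-1, 1, 10_000)),
                ("normal", rng.normal(0, 0.3, 10_000))]:
    for quant in (uniform_quantize, kmeans_quantize):
        mse = np.mean((w - quant(w, 16)) ** 2)
        print(f"{name:8s} {quant.__name__:18s} MSE={mse:.2e}")
```

On the normal samples the k-means codebook concentrates its levels near zero, where most of the weight mass lies, which is why its quantization MSE comes out markedly lower than the uniform grid's, in line with the excerpt's claim.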
“…6 shows that NNR achieves high compression even at high performance qualities and significantly outperforms comparable methods. As an example, cr = 2.98% for VGG16 NCTM, while cr = 7.74% for VGG16 BZIP, as given in [38]. The graphs also show that much higher compression ratios (far below 3%) can be achieved for compressed NNs in lossy coding scenarios, i.e.…”
Section: Compression Performance
confidence: 83%
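The compression ratio cr quoted above is the compressed size expressed as a percentage of the original size, so lower is better. A small sketch of the implied savings, taking roughly 528 MB as the size of VGG16's float32 weights (an approximation for illustration, not a figure from the excerpt):

```python
# Sketch of the compression-ratio metric quoted above (lower is better):
# cr = 100 * compressed_size / original_size.
def compression_ratio(compressed_mb: float, original_mb: float) -> float:
    return 100.0 * compressed_mb / original_mb

# VGG16 has ~138M parameters; at 4 bytes each the float32 weights are ~528 MB
# (an approximation, not taken from the excerpt).
ORIGINAL_MB = 528.0
for method, cr in [("NCTM", 2.98), ("BZIP", 7.74)]:
    compressed_mb = ORIGINAL_MB * cr / 100.0
    assert abs(compression_ratio(compressed_mb, ORIGINAL_MB) - cr) < 1e-9
    print(f"{method}: {compressed_mb:5.1f} MB ({100.0 / cr:.1f}x smaller)")
```

Under that assumption, cr = 2.98% shrinks the model to about 16 MB (roughly 34x), versus about 41 MB (roughly 13x) at cr = 7.74%.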
“…In the NNR standard, the quantization indices output by the selected quantization method, together with all other syntax elements, are entropy coded using DeepCABAC [38]. This method is based on context-adaptive binary arithmetic coding (CABAC) [39], [40], which was originally developed and optimized for video compression.…”
Section: Entropy Coding
confidence: 99%
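The excerpt names the key mechanism: symbols are binarized and each bin is coded with an adaptive probability model selected by context. Below is a minimal illustrative sketch of that idea, emphatically not the NNR/DeepCABAC implementation: it binarizes signed quantization indices into significance, sign, and unary bins, keeps one adaptive model per bin context, and reports the ideal arithmetic-coding cost -log2(p) per bin instead of emitting a real bitstream. The binarization scheme, context labels, and count-based update rule are simplifying assumptions:

```python
# Toy sketch of context-adaptive coding of binarized quantization indices.
import math
from collections import defaultdict

class AdaptiveBinModel:
    """Count-based estimate of P(bin = 1), updated after every coded bin."""
    def __init__(self):
        self.ones, self.total = 1, 2  # Laplace-smoothed counts
    def cost_and_update(self, bit):
        p1 = self.ones / self.total
        cost = -math.log2(p1 if bit else 1.0 - p1)  # ideal code length
        self.ones += bit
        self.total += 1
        return cost

def binarize(q):
    """Signed index -> (context, bin) pairs: significance, sign, unary rest.
    Real CABAC uses position-dependent contexts; one shared 'unary' context
    is a simplification here."""
    bins = [("sig", int(q != 0))]
    if q != 0:
        bins.append(("sign", int(q < 0)))
        m = abs(q) - 1
        bins += [("unary", 1)] * m + [("unary", 0)]
    return bins

models = defaultdict(AdaptiveBinModel)  # one adaptive model per context
indices = [0, 0, 1, -1, 0, 2, 0, 0, -1, 0, 0, 3]  # toy index stream
bits = sum(models[ctx].cost_and_update(b)
           for q in indices for ctx, b in binarize(q))
print(f"{bits:.1f} bits for {len(indices)} indices "
      f"({bits / len(indices):.2f} bits/index)")
```

Because zeros dominate this toy stream, the "sig" model quickly learns a low P(1), so the significance flags cost well under one bit each, which is the adaptivity that makes CABAC-style coding effective on sparse quantized weights.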