2018 17th International Symposium INFOTEH-JAHORINA (INFOTEH)
DOI: 10.1109/infoteh.2018.8345545
Compression of convolutional neural networks: A short survey

Cited by 8 publications (10 citation statements) | References 7 publications
“…The first weight pruning method, Optimal Brain Damage (OBD), is introduced in [10]. It removes weights based on a saliency measure in an iterative manner.…”
Section: Network Compression Using Pruning Methods
confidence: 99%
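The iterative saliency-based pruning described above can be sketched as follows. This is a minimal approximation that uses weight magnitude as the saliency measure (OBD itself derives saliency from second-derivative information), omits the retraining normally done between pruning steps, and all names are illustrative:

```python
import numpy as np

def prune_by_saliency(weights, prune_frac=0.2):
    """One pruning step: zero out the lowest-saliency remaining weights.

    Saliency is approximated here by absolute magnitude; OBD uses a
    Hessian-based estimate of each weight's effect on the loss.
    """
    w = weights.copy()
    nonzero = np.abs(w[w != 0])
    k = int(prune_frac * nonzero.size)   # number of weights to remove this step
    if k == 0:
        return w
    threshold = np.sort(nonzero)[k - 1]  # k-th smallest surviving magnitude
    w[np.abs(w) <= threshold] = 0.0
    return w

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))              # stand-in for a layer's weight matrix
for _ in range(3):                       # iterative pruning; retraining omitted
    w = prune_by_saliency(w, prune_frac=0.2)
```

Each pass removes a fixed fraction of the surviving weights, so sparsity grows gradually rather than in one aggressive cut.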
“…Other networks like BinaryConnect [36] or BinaryNet [37] directly train models with 1-bit integer weights; such models form a class of deep neural networks in their own right, called Binarized Neural Networks (BNNs). Another appealing advantage of this method is that accuracy is maintained or, at worst, drops by less than 1% in most cases relative to the original model [38]. There are two main types of quantization: linear (uniform) and non-linear (non-uniform) [39].…”
Section: Quantization
confidence: 99%
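A minimal sketch of the linear (uniform) variant mentioned above, assuming an 8-bit grid and simple min-max scaling; the function name and scheme are illustrative, not the method of any specific cited paper:

```python
import numpy as np

def quantize_uniform(w, num_bits=8):
    """Linear (uniform) quantization: map float weights onto an evenly
    spaced integer grid, then map back (dequantize) to measure the error."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)   # step between grid points
    zero_point = qmin - w.min() / scale           # integer offset for w.min()
    q = np.clip(np.round(w / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale               # dequantized approximation

w = np.linspace(-1.0, 1.0, 11)
w_hat = quantize_uniform(w, num_bits=8)
```

The rounding error per weight is bounded by half a quantization step, which is why accuracy loss stays small at 8 bits; non-uniform schemes instead place grid points where the weight distribution is densest.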
“…The four methods explained in the prior subsections compress deep neural networks and reduce the number of parameters, but the architecture, i.e. the number and order of the layers, remains the same. A way to reduce the weights in a CNN is filter decomposition, which consists of replacing big convolutional layers with more convolutional layers that use smaller filters [38].…”
Section: Design Of Smaller Architectures
confidence: 99%
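The parameter savings from filter decomposition can be illustrated with a simple count, assuming the classic substitution of one 5x5 convolution by a stack of two 3x3 convolutions (same receptive field); the channel count and helper name are illustrative:

```python
def conv_params(c_in, c_out, k):
    """Weight count of a k x k convolutional layer (biases omitted)."""
    return c_in * c_out * k * k

c = 64                                   # illustrative channel count
big = conv_params(c, c, 5)               # one 5x5 layer
small = 2 * conv_params(c, c, 3)         # two stacked 3x3 layers
savings = 1 - small / big                # fraction of weights removed
```

With 64 input and output channels this replaces 102 400 weights by 73 728, a 28% reduction, while the two stacked layers also add an extra non-linearity.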
“…Hardware accelerators [141,192] are designed primarily for network acceleration. They include specialized …”
Section: Introduction
confidence: 99%