2022
DOI: 10.1155/2022/2213273

DeepCompNet: A Novel Neural Net Model Compression Architecture

Abstract: The emergence of powerful deep learning architectures has resulted in breakthrough innovations in several fields, such as healthcare, precision farming, banking, and education. Despite these advantages, deploying deep learning models on resource-constrained devices is limited by their huge memory footprint. This research work reports an innovative hybrid compression pipeline for compressing neural networks, exploiting the untapped potential of the z-score in weight pruning, followed by quantiza…
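The z-score pruning step mentioned in the abstract can be sketched as follows. This is a minimal illustration under one plausible interpretation (keep weights whose standardized magnitude exceeds a threshold); the excerpt does not show the paper's exact criterion, and the `threshold` value here is an assumption.

```python
import numpy as np

def zscore_prune(weights, threshold=1.0):
    """Zero out weights whose |z-score| falls below `threshold`.

    Weights near the layer's mean value are treated as carrying little
    information and are set to zero; "outlier" weights are kept.
    """
    mu = weights.mean()
    sigma = weights.std()
    z = (weights - mu) / sigma          # standard score of each weight
    mask = np.abs(z) >= threshold       # True where the weight survives
    return weights * mask, mask

# Example: prune a small random weight matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = zscore_prune(w, threshold=1.0)
print(f"sparsity: {1 - mask.mean():.2f}")
```

For roughly zero-mean weight distributions this reduces to magnitude pruning with a data-dependent threshold, which is what makes the z-score formulation cheap enough for resource-constrained deployment.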

Cited by 13 publications (4 citation statements)
References 19 publications (17 reference statements)
“…We also call it early pruning. In network pruning, we remove the neurons of the network that have low weights [228][229][230][231][232][233], while in hybrid pruning, we fuse the process of weight reduction using temporal and spatial information [234][235][236].…”
Section: Pruning Strategies in UNet-Based Deep Learning
confidence: 99%
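The neuron-removal criterion quoted above can be sketched as a magnitude filter over whole neurons (an illustrative interpretation only; the cited works [228]–[233] each define "low in weight" differently, and `keep_ratio` is a made-up parameter):

```python
import numpy as np

def prune_low_weight_neurons(W, keep_ratio=0.75):
    """Drop the output neurons (rows of W) with the smallest L2 weight norm.

    W has shape (n_neurons, n_inputs); the surviving rows are returned
    in their original order.
    """
    norms = np.linalg.norm(W, axis=1)          # one norm per neuron
    k = max(1, int(round(keep_ratio * W.shape[0])))
    keep = np.argsort(norms)[-k:]              # indices of the strongest neurons
    return W[np.sort(keep)]

W = np.arange(12, dtype=float).reshape(4, 3)   # toy 4-neuron layer
print(prune_low_weight_neurons(W, keep_ratio=0.5).shape)  # → (2, 3)
```

Unlike elementwise weight pruning, removing whole neurons shrinks the layer's actual dimensions, so the saving is realized without sparse-matrix support.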
“…The DeepCompNet [266,267] algorithm demonstrated quantization using the density-based clustering algorithm DBSCAN, compressing neural networks with z-score weight pruning and Huffman encoding. For devices with limited resources, the z-score method provides a straightforward and useful architecture.…”
Section: Pruning Training Models
confidence: 99%
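The DBSCAN-based quantization described above amounts to weight sharing: cluster the surviving weight values, then replace each weight by its cluster centroid. A rough sketch using scikit-learn's `DBSCAN` follows; the `eps` and `min_samples` values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dbscan_quantize(weights, eps=0.05, min_samples=3):
    """Cluster the weight values with DBSCAN and replace each weight by
    its cluster centroid (weight sharing). Noise points (label -1) keep
    their original value.
    """
    flat = weights.reshape(-1, 1)              # DBSCAN expects 2-D samples
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(flat)
    q = flat.copy()
    for lbl in set(labels.tolist()):
        if lbl == -1:
            continue                           # leave noise points untouched
        members = labels == lbl
        q[members] = flat[members].mean()      # shared centroid value
    return q.reshape(weights.shape), labels
```

After this step only the cluster centroids and per-weight cluster indices need to be stored, and the index stream is exactly what the Huffman-encoding stage then compresses.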
“…Two different approaches are used to this end. The first of these is thinning, or compression [1]. These methods can reduce computational costs by 30-50 times without degrading the quality of neural networks [2], or even while improving it [3].…”
Section: Introduction
confidence: 99%