2021
DOI: 10.1109/access.2021.3049470
Compressing Convolutional Neural Networks by Pruning Density Peak Filters

Abstract: With the recent development of GPUs, the depth of convolutional neural networks (CNNs) has increased and their structure has become complex. Hence, it is challenging to deploy them on hardware devices owing to their immense computational cost and the memory required to store parameters. To overcome this problem, we propose a method that prunes the filter located near the density peak, grasping the density of the filter space for each layer. The density is calculated in the filter space based on the number of neighboring…
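The truncated abstract describes ranking a layer's filters by local density (number of neighboring filters within some radius) and pruning the filter near the density peak, on the grounds that a filter with many close neighbors is redundant. A minimal sketch of that idea follows; the function name, the distance metric, and the mean-distance cutoff heuristic are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def density_peak_prune(filters, cutoff=None, n_prune=1):
    """Prune the filter(s) nearest the density peak of a layer's filter space.

    Illustrative sketch only: the cutoff heuristic and Euclidean metric
    are assumptions, not the published method.
    filters: array of shape (num_filters, in_channels, kH, kW)
    """
    flat = filters.reshape(filters.shape[0], -1)
    # Pairwise Euclidean distances between flattened filters.
    dists = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    if cutoff is None:
        # Assumed heuristic: mean off-diagonal distance as the radius.
        cutoff = dists[dists > 0].mean()
    # Local density = number of neighbors within the cutoff radius.
    density = (dists < cutoff).sum(axis=1) - 1  # exclude self
    # The densest filters sit at the density peak and are treated as redundant.
    prune_idx = np.argsort(density)[::-1][:n_prune]
    keep = np.setdiff1d(np.arange(filters.shape[0]), prune_idx)
    return filters[keep], prune_idx

# Example: eight random 3x3 filters over 3 input channels; prune the densest one.
rng = np.random.default_rng(0)
f = rng.normal(size=(8, 3, 3, 3))
pruned, removed = density_peak_prune(f)
```

In a full pipeline this would be applied layer by layer, with the corresponding input channels of the next layer removed as well, followed by fine-tuning.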

Cited by 9 publications (2 citation statements)
References 20 publications
“…Choi et al. [48] used SHAP to demonstrate the high-risk factors responsible for higher phosphate. Further, to improve the speed of the AI model, model optimization techniques such as weight clustering and AI model pruning [160, 161, 162, 163, 164] can be applied [115, 165, 166, 167, 168, 169]. Techniques such as storage reduction are necessary when dealing with AI solutions [51, 54, 170, 171, 172].…”
Section: Discussion
confidence: 99%
“…The current pruning literature has been classified into three categories: (i) channel pruning (so-called filter pruning), (ii) network pruning, and (iii) hybrid pruning. The main principle of channel pruning is to cut down the filters at an early stage of the AI model design [218, 219, 220, 221, 222, 223, 224, 225, 226, 227]. This is also called early pruning.…”
Section: Pruning Strategies in UNet-based Deep Learning
confidence: 99%