2023
DOI: 10.1016/j.knosys.2023.110386
Iterative clustering pruning for convolutional neural networks

Cited by 20 publications (6 citation statements)
References 10 publications
“…The efficacy of our proposed algorithm is demonstrated through a representative example in Subfigure 2a. The algorithm detects the saliency of filters, yielding a sequence of indices such as [5,14,7,13,4,1,15,6,10,0,3,8,9,2,12,11]. This sequence effectively identifies the most (index 0, darkest-blue) and least (index 9, brightest-yellow) important filters.…”
Section: Filters Selection
Mentioning, confidence: 99%
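The statement above describes ranking convolutional filters by saliency to obtain an ordered index sequence. As a minimal, hedged sketch of that idea (the L1-norm score below is an assumed stand-in for illustration only, not the clustering-based criterion of the cited work), filters can be scored and argsorted into such a sequence:

```python
import numpy as np

# Hypothetical conv-layer weights: 16 filters over 3 input channels, 3x3 kernels.
rng = np.random.default_rng(0)
filters = rng.normal(size=(16, 3, 3, 3))

# Assumed saliency score: L1 norm of each filter (illustrative stand-in).
saliency = np.abs(filters).reshape(len(filters), -1).sum(axis=1)

# Argsort yields an ordered index sequence analogous to the one quoted above;
# the direction (least to most salient here) is an assumption.
ranking = np.argsort(saliency)
print("filter indices ordered by saliency:", ranking.tolist())
```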
“…Concerning pruning strategy, there exist two main approaches: one-shot pruning [38,63,79] and multi-shot pruning [4,14]. With the first one, the entire network is pruned in a single shot, and the filters or weights in each layer are either preserved or pruned at once.…”
Section: Introduction
Mentioning, confidence: 99%
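To make the one-shot versus multi-shot distinction above concrete, here is a minimal, hedged sketch: prune_to_sparsity and fine_tune are hypothetical helpers, magnitude-based weight pruning stands in for whichever criterion a given method uses, and the 75% target and three-round schedule are illustrative only.

```python
import numpy as np

def prune_to_sparsity(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Hypothetical helper: zero out the smallest-magnitude weights until
    the requested overall sparsity is reached."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def fine_tune(weights: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for a retraining / fine-tuning round."""
    return weights

w = np.random.default_rng(0).normal(size=(64, 64))

# One-shot pruning: the target sparsity is imposed in a single pass.
w_one_shot = fine_tune(prune_to_sparsity(w, sparsity=0.75))

# Multi-shot (iterative) pruning: the same target is reached gradually,
# with a fine-tuning round after each pruning step.
w_multi_shot = w.copy()
for sparsity in (0.30, 0.55, 0.75):
    w_multi_shot = fine_tune(prune_to_sparsity(w_multi_shot, sparsity))
```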
“…Pruning techniques were initially employed in decision tree models and later extended to neural network models [25]. Through appropriate pruning, the computational structures of neural networks are streamlined, eliminating the influence of redundant parameters and enhancing their generalization performance [26,27]. However, due to current limitations in neural network computation methods, pruning methods require trained models.…”
Section: Related Work
Mentioning, confidence: 99%
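The point that pruning operates on an already-trained model can be illustrated with a minimal, hedged sketch: a hypothetical trained convolutional layer is physically shrunk by dropping its least-important filters, where the L1-norm criterion and the 50% keep ratio are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical *trained* conv-layer weights: 16 output filters, 8 input
# channels, 3x3 kernels. In practice these would come from a trained model.
rng = np.random.default_rng(0)
trained_filters = rng.normal(size=(16, 8, 3, 3))

# Score each filter (assumed criterion: L1 norm) and keep the strongest half,
# physically removing the redundant filters rather than merely masking them.
scores = np.abs(trained_filters).reshape(len(trained_filters), -1).sum(axis=1)
keep = np.sort(np.argsort(scores)[8:])      # indices of the 8 strongest filters
pruned_filters = trained_filters[keep]

print("shape before:", trained_filters.shape)   # (16, 8, 3, 3)
print("shape after: ", pruned_filters.shape)    # (8, 8, 3, 3)
```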
“…Its widespread adoption … In fact, the computational burden of a CNN primarily lies in the convolutional and fully connected layers, as the number of parameters and the computational complexity of these two layers determine the model's computation and convergence speed. To address these challenges, various model compression and acceleration techniques have been proposed, such as model pruning [15], model quantization [16], and low-rank decomposition [17], taking into account both model performance and parameter reduction. However, these techniques are often complex and challenging to implement for large networks.…”
Section: Introduction
Mentioning, confidence: 99%
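As a concrete illustration of one of the techniques listed above, here is a minimal, hedged sketch of low-rank decomposition via truncated SVD of a fully connected layer's weight matrix; the 1024x1024 size and rank 64 are purely illustrative assumptions.

```python
import numpy as np

# Hypothetical fully connected layer weight matrix (illustrative size).
rng = np.random.default_rng(0)
W = rng.normal(size=(1024, 1024))

# Truncated SVD keeps only the top-r singular components, replacing the
# single matrix W by two thin factors A (1024 x r) and B (r x 1024).
r = 64
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]          # 1024 x r, columns scaled by singular values
B = Vt[:r, :]                 # r x 1024

original_params = W.size
compressed_params = A.size + B.size
print(f"parameters: {original_params} -> {compressed_params} "
      f"({compressed_params / original_params:.1%} of the original)")

# The layer's product x @ W is then approximated by (x @ A) @ B,
# trading some accuracy for fewer parameters and multiply-adds.
```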