2023
DOI: 10.1016/j.neucom.2023.01.014
EACP: An effective automatic channel pruning for neural networks

Cited by 16 publications (5 citation statements). References 31 publications.
“…Table 2 shows pruning results of GoogLeNet on CIFAR-10. In all conducted cases, CORING consistently outperforms other methods [35,37,46] in every way. Therefore, inception modules can use our method to achieve high performance in comparison to cutting-edge techniques.…”
Section: Results and Analysis
confidence: 81%
“…Comparison. CORING is compared with 44 SOTAs in the fields of structured pruning [1,3,5,6,7,11,12,15,16,18,19,23,26,29,30,31,32,34,35,36,37,38,39,40,41,43,45,46,47,48,49,56,58,63,64,66,68,73,76,78,79,80,83,85]. For a fair comparison, all available baseline models are identical.…”
Section: Experimental Settings
confidence: 99%
“…where š¼ denotes the identity matrix. Then the Hutchinson algorithm [4] can be utilized to compute the Hessian trace: 6), (7), and (8), respectively.…”
Section: Rank Selection Based On Sensitivity
confidence: 99%
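The statement above invokes Hutchinson's stochastic trace estimator, Tr(H) = E[zᵀHz] for random vectors z with Rademacher (±1) entries. A minimal NumPy sketch follows; the hvp callback, the sample count, and the toy matrix are illustrative assumptions, and a real rank-selection pipeline would supply Hessian-vector products from the network rather than a dense matrix.

```python
import numpy as np

def hutchinson_trace(hvp, dim, num_samples=100, rng=None):
    """Estimate Tr(H) via Hutchinson's method: Tr(H) = E[z^T H z]
    for z with i.i.d. Rademacher (+/-1) entries.

    hvp: function mapping a vector v to H @ v (Hessian-vector product).
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=dim)  # Rademacher probe vector
        total += z @ hvp(z)                    # one sample of z^T H z
    return total / num_samples

# Toy check against an explicit symmetric matrix (illustrative only).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
est = hutchinson_trace(lambda v: A @ v, dim=2, num_samples=5000, rng=0)
print(est, np.trace(A))  # the estimate should be close to 5.0
```

Because the probes are averaged i.i.d. samples, the estimator's variance shrinks as 1/num_samples, which is why a modest number of probes is typically enough for sensitivity-based rank selection.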
“…Several popular techniques to decrease redundancy in neural network parameters include pruning [3,4], sparsification [5,6], quantization [7,8], and low-rank tensor decomposition [9,10]. Among these techniques, the tensor decomposition-based compression method is currently attracting growing interest.…”
Section: Introduction
confidence: 99%
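To make the low-rank decomposition idea in this statement concrete, here is a minimal sketch (not the cited papers' method) that factors a fully connected weight matrix with a truncated SVD; the matrix shape and the rank are arbitrary assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical fully connected weight matrix (shape chosen for illustration).
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))

# Truncated SVD: W ~= (U_r * s_r) @ V_r, keeping the top-r singular triplets.
r = 32
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]  # 256 x r factor
B = Vt[:r, :]         # r x 512 factor

params_before = W.size
params_after = A.size + B.size
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {params_before} -> {params_after}, relative error: {err:.3f}")
```

At inference the single dense layer is replaced by two smaller ones, so parameters and multiply-adds drop by roughly the same factor, at the cost of the approximation error reported above.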
“…To address these challenges, researchers and engineers have been exploring various methods for DNNs compression, including pruning (Tmamna et al, 2021; Liu et al, 2023), quantization (Bablani et al, 2023; Ma et al, 2023; Tmamna et al, 2023), and knowledge distillation (Yim et al, 2017; Zhao et al, 2020). Pruning involves removing unnecessary connections or parameters in the network, resulting in a smaller model size.…”
Section: Introduction
confidence: 99%
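Since this statement defines pruning as removing unnecessary connections or parameters, a short magnitude-based channel-pruning sketch may help; the L1 criterion, the 50% ratio, and the tensor shapes are illustrative assumptions, not the EACP procedure itself.

```python
import numpy as np

def prune_channels(conv_w, ratio=0.5):
    """Remove the output channels of a conv weight tensor with the
    smallest L1 norms (a simple magnitude criterion, not EACP itself).

    conv_w: array of shape (out_channels, in_channels, kH, kW).
    Returns the pruned weights and the indices of the kept channels.
    """
    norms = np.abs(conv_w).reshape(conv_w.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(conv_w.shape[0] * (1.0 - ratio)))
    keep = np.sort(np.argsort(norms)[-n_keep:])  # keep largest-norm channels
    return conv_w[keep], keep

# Toy layer: 16 output channels, 8 input channels, 3x3 kernels.
w = np.random.default_rng(0).standard_normal((16, 8, 3, 3))
w_pruned, kept = prune_channels(w, ratio=0.5)
print(w.shape, "->", w_pruned.shape, "kept channels:", kept)
```

Note that dropping output channels in one layer also shrinks the input-channel dimension of the following layer, which is where most of the actual size and compute savings of channel pruning come from.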