Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/309

Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks

Abstract: Existing methods usually utilize pre-defined criteria, such as the ℓp-norm, to prune unimportant filters. There are two major limitations in these methods. First, the relations of the filters are largely ignored. The filters usually work jointly to make an accurate prediction in a collaborative way. Similar filters will have equivalent effects on the network prediction, and the redundant filters can be further pruned. Second, the pruning criterion remains unchanged during training. As the network is updated at each…
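The ℓp-norm criterion referred to in the abstract ranks each filter by the magnitude of its weights and prunes the lowest-ranked ones. Below is a minimal PyTorch-style sketch of that criterion; the function names, the pruning ratio, and the layer shape are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

def lp_norm_filter_scores(conv: nn.Conv2d, p: int = 2) -> torch.Tensor:
    """Score each output filter of a Conv2d layer by its lp-norm.

    conv.weight has shape (out_channels, in_channels, kH, kW); each filter
    is flattened and reduced to a single norm value.
    """
    weight = conv.weight.detach()
    return weight.view(weight.size(0), -1).norm(p=p, dim=1)

def select_filters_to_prune(conv: nn.Conv2d, prune_ratio: float, p: int = 2) -> torch.Tensor:
    """Return indices of the lowest-scoring filters under the lp-norm criterion."""
    scores = lp_norm_filter_scores(conv, p)
    num_prune = int(prune_ratio * scores.numel())
    return torch.argsort(scores)[:num_prune]

# Example: rank the filters of a single (hypothetical) 3x3 convolution.
conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3)
pruned_idx = select_filters_to_prune(conv, prune_ratio=0.3)
print(f"{pruned_idx.numel()} of {conv.out_channels} filters selected for pruning")
```

Because this criterion scores each filter in isolation, two nearly identical filters can both be kept, which is exactly the redundancy the abstract points out.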

Cited by 728 publications (550 citation statements) | References 28 publications
“…We compare our results to those reported by Li's method [26], [37] and by Soft Filter [17] for ResNet in Table 2, and to those reported by Zhang's method [18] for ResNeXt in Table 4. As shown, ResNet models trained using the proposed method achieve up to 0.31% accuracy improvement at pruning ratios of 65.7%–79.5% when compared to the baseline.…”
Section: Comparison To Existing Approaches (mentioning)
confidence: 84%
“…Table 2: Comparison with existing methods. FLOPs↓ and Params↓ denote the reduction of FLOPs and parameters. (c) ResNet-50 on ImageNet-1K:

Method           FLOPs↓    Top-1 Acc↓   Top-5 Acc↓
LcP [3]          25.00%    -0.09%       -0.19%
NISP [36]        27.31%     0.21%       -
SSS [18]         31.08%     1.94%        0.95%
ThiNet [26]      36.79%     0.84%        0.47%
OICSR-GL         37.30%    -0.22%       -0.16%
He et al. [11]   41.80%     1.54%        0.81%
GDP [23]         42.00%     2.52%        1.25%
LcP [3]          42.00%     0.85%        0.26%
NISP [36]        44.41%     0.89%       -
OICSR-GL         44.43%     0.01%        0.08%
He et al. [13]   50.00%    -             1.40%
LcP [3]          50.00%     0.96%        0.42%
OICSR-GL         50.00%     0.37%        0.34%
…”
Section: Methods (mentioning)
confidence: 99%
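The FLOPs↓ column in the quoted table is the relative reduction in convolution operations. The sketch below shows how pruning a fraction of a layer's filters maps to a FLOPs reduction for a single standard convolution; the layer shape and pruning ratio are hypothetical, and the compounding effect across consecutive layers is left out for brevity.

```python
def conv_flops(out_c: int, in_c: int, k: int, out_h: int, out_w: int) -> int:
    """Multiply-accumulate count of one standard convolution layer."""
    return out_c * in_c * k * k * out_h * out_w

# Hypothetical layer: 256 filters, 128 input channels, 3x3 kernels, 56x56 output.
base = conv_flops(256, 128, 3, 56, 56)
# Pruning 30% of the filters shrinks out_channels; in a full network it also
# shrinks the next layer's in_channels, so the overall saving compounds.
pruned = conv_flops(int(256 * 0.7), 128, 3, 56, 56)
print(f"FLOPs reduction for this layer alone: {1 - pruned / base:.1%}")
```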
“…Chin et al. [3] considered channel pruning as a global ranking problem and compensated the layer-wise approximation error, which improved performance for various heuristic metrics. To reduce the accuracy loss caused by incorrect channel pruning, redundant channels were pruned in a dynamic way in [11, 23]. Furthermore, Huang et al. [17] and Huang & Wang [18] trained pruning agents and removed redundant structure in a data-driven way.…”
Section: Related Work (mentioning)
confidence: 99%
“…Liu et al. impose channel sparsity by applying ℓ1 regularization to the scaling factors in batch normalization. In [11], He et al. propose a soft filter pruning method that allows the pruned filters to be updated during the training procedure.…”
Section: Related Work (mentioning)
confidence: 99%
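The last quoted statement summarizes the core idea of the paper: pruned filters are zeroed but kept in the model and continue to receive gradient updates, so they can recover before the final, permanent removal. The following is a minimal sketch of that idea, assuming a simple per-epoch schedule, an ℓ2-norm criterion, and hypothetical model and ratio choices; it is an illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def soft_prune_conv(conv: nn.Conv2d, prune_ratio: float = 0.3, p: int = 2) -> None:
    """Zero out the lowest lp-norm filters in place without removing them.

    Because the zeroed filters stay in the network and keep receiving
    gradient updates, a filter pruned in one epoch can grow back and
    escape pruning in a later epoch (the "soft" part of soft pruning).
    """
    weight = conv.weight
    scores = weight.view(weight.size(0), -1).norm(p=p, dim=1)
    num_prune = int(prune_ratio * scores.numel())
    prune_idx = torch.argsort(scores)[:num_prune]
    weight[prune_idx] = 0.0
    if conv.bias is not None:
        conv.bias[prune_idx] = 0.0

# Hypothetical training loop: train normally, then prune softly after each epoch.
model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 64, 3, padding=1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
for epoch in range(5):
    # ... one epoch of ordinary training on real data would go here ...
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            soft_prune_conv(module, prune_ratio=0.3)
# Only after the final epoch would the zeroed filters be removed for good.
```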