2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv45572.2020.9093331

Leveraging Filter Correlations for Deep Model Compression

Abstract: We present a filter-correlation-based model compression approach for deep convolutional neural networks. Our approach iteratively identifies the pairs of filters with the largest pairwise correlations and drops one filter from each pair. However, instead of discarding a filter naïvely, the model is first re-optimized to make the filters in these pairs maximally correlated, so that discarding one of them results in minimal information loss. Moreover, after d…
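To make the selection step concrete, here is a minimal sketch in PyTorch of how the most correlated filter pair in a convolution layer could be identified. It assumes a weight tensor of shape (out_channels, in_channels, kH, kW); the function name is hypothetical, and this is not the authors' released code, which additionally re-optimizes each selected pair before dropping a filter.

```python
import torch

def most_correlated_filter_pair(weight: torch.Tensor) -> tuple:
    # Flatten each filter into a vector: (out_channels, in_channels*kH*kW)
    flat = weight.reshape(weight.shape[0], -1)
    # Standardize each filter so that dot products become Pearson correlations
    flat = flat - flat.mean(dim=1, keepdim=True)
    flat = flat / (flat.norm(dim=1, keepdim=True) + 1e-8)
    corr = flat @ flat.t()              # pairwise correlation matrix
    corr.fill_diagonal_(float("-inf"))  # exclude a filter's self-correlation
    idx = torch.argmax(corr).item()     # flat index of the largest entry
    return divmod(idx, corr.shape[0])   # (row, column) = the filter pair

# Example: pick the most redundant pair in a random 16-filter 3x3 conv layer
w = torch.randn(16, 3, 3, 3)
i, j = most_correlated_filter_pair(w)
print(f"most correlated filters: {i} and {j}")
```

In the paper's full pipeline, the selected pair would first be driven toward maximal correlation by re-optimization, so that removing one of the two filters loses as little information as possible.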

Cited by 59 publications (42 citation statements). References 46 publications.
“…We found that our method observed little drop (or even an improvement) in top-5 accuracy compared to some prior work (Yu et al., 2018; Singh et al., 2020). We think the better pruning performance comes from the fact that the mapping to an intermediate space reduces the correlation between neurons, so pruning in this space can remove the useless neurons.…”
Section: Discussion (mentioning)
confidence: 49%
“…In addition, the work [30] introduces a discrimination-aware loss to keep channels that contribute to the discriminative power of the network. Other methods prune channels by optimizing a reconstruction-error objective [20,31], reducing the similarity between features [32,33], or directly evaluating channels' significance [17,18]. Our algorithm is based on the evaluation of channel saliency as well.…”
Section: Related Work on Pruning (mentioning)
confidence: 99%
“…Refs. [30–36] prune filters that make a minimal contribution to the model. After removing these filters, the model is usually fine-tuned to maintain its performance.…”
Section: Model Compression (mentioning)
confidence: 99%
“…Model compression is considered another reliable and economical way to improve the efficiency of convolutional neural networks, and it can be roughly divided into three categories: (a) connection pruning [28,29]; (b) filter pruning [30–36]; and (c) quantization [28,37–39]. These methods can effectively reduce the computation of a convolutional neural network, but this is usually achieved at the price of some accuracy.…”
Section: Introduction (mentioning)
confidence: 99%