Third International Conference on Computer Vision and Data Mining (ICCVDM 2022) 2023
DOI: 10.1117/12.2660087
Model compression method combining multi-factor channel pruning and knowledge distillation

Abstract: Deploying deep learning models on embedded terminals is essential for applications with real-time inference requirements. To make a model run efficiently on a resource-limited embedded device, we propose a model compression method combining multi-factor channel pruning and knowledge distillation. During network sparsification, this method uses the two learnable factors of the BN layer to improve the pruning criterion and guides local pruning of the model according to the new criterion to ensure…
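The abstract describes scoring channels with both batch-normalization factors (the scale γ and shift β) rather than γ alone, then pruning each layer locally against that score. A minimal sketch of that idea follows; the exact combination of the two factors and the per-layer keep rule are assumptions here (the truncated abstract does not specify them), shown as |γ| + |β| with a fixed per-layer prune ratio.

```python
import numpy as np

def channel_importance(gamma: np.ndarray, beta: np.ndarray) -> np.ndarray:
    # Combine both BN factors into one per-channel score.
    # Assumed combination: |gamma| + |beta| (the paper's exact
    # multi-factor criterion is not given in the truncated abstract).
    return np.abs(gamma) + np.abs(beta)

def local_prune_mask(gamma, beta, prune_ratio: float) -> np.ndarray:
    # Local (per-layer) pruning: keep the top (1 - prune_ratio)
    # fraction of channels in this layer by combined score.
    scores = channel_importance(np.asarray(gamma, dtype=float),
                                np.asarray(beta, dtype=float))
    keep = max(1, int(round(scores.size * (1.0 - prune_ratio))))
    order = np.argsort(scores)[::-1]          # highest score first
    mask = np.zeros(scores.size, dtype=bool)
    mask[order[:keep]] = True                 # True = channel survives
    return mask

# Example: prune half the channels of a 4-channel BN layer.
mask = local_prune_mask(gamma=[0.9, 0.01, 0.5, 0.02],
                        beta=[0.1, 0.0, 0.2, 0.01],
                        prune_ratio=0.5)
```

With these toy values the two channels with the largest combined |γ| + |β| survive; in the full method the surviving channels would then be fine-tuned under a knowledge-distillation loss from the unpruned teacher.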

Cited by 0 publications · References 23 publications