2022 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip46576.2022.9898051
Which Metrics for Network Pruning: Final Accuracy? Or Accuracy Drop?

Abstract: Network pruning enables the use of deep neural networks in low-resource environments by removing redundant elements from a pre-trained network. To appraise each pruning method, two evaluation metrics are generally adopted: final accuracy and accuracy drop. Final accuracy represents the ultimate performance of the pruned sub-network after pruning completes. On the other hand, accuracy drop, a more traditional measure, is the accuracy difference between the baseline model and the final pruned m…
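The two metrics named in the abstract can be sketched in a few lines. This is a minimal illustration, not code from the paper; the function name and the example accuracies are hypothetical.

```python
def pruning_metrics(baseline_acc: float, pruned_acc: float):
    """Return the two common pruning evaluation metrics.

    - final accuracy: the accuracy of the pruned sub-network itself
    - accuracy drop: baseline accuracy minus pruned accuracy
    """
    final_accuracy = pruned_acc
    accuracy_drop = baseline_acc - pruned_acc
    return final_accuracy, accuracy_drop


# Illustrative values: a 93.5%-accurate baseline pruned to 92.1%.
final_acc, drop = pruning_metrics(0.935, 0.921)
```

Note the asymmetry the paper's title points at: final accuracy depends only on the pruned model, while accuracy drop also depends on how strong the chosen baseline was.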

Cited by 3 publications (5 citation statements) | References 15 publications
“…To address this problem, one possible solution is to adopt more informative metrics (e.g., based on validation loss) as the indicator to the model accuracy. Although some existing schemes [37,51] provided methods to approximately calculate such validation accuracy and loss, they are too computationally expensive for runtime use. Exploring more computationally efficient solutions will be our future work.…”
Section: Discussion
confidence: 99%
“…On the other hand, performance can be evaluated in terms of accuracy (or a similar metric, such as the F1-score) or loss (such as cross-entropy). The difference between the original model and the pruned model is then calculated (e.g., accuracy drop [11]), and, usually, the model with the smallest drop in accuracy and the highest number of FLOPs and/or the lowest number of parameters is selected.…”
Section: Accuracy In Image Classification Models
confidence: 99%
“…Accuracy (i.e., final accuracy, accuracy drop) and other similar metrics, such as the F1-score, are often used to evaluate the performance. The model size is measured as a function of the number of parameters, while the computational cost is determined by the number of FLOPs (floating point operations) [11][12][13][14]. In this way, the reduction results of the pruned model with respect to the unpruned model can be quantified, taking into account that the smaller the reduction in accuracy (or the smaller the increase in error) and the greater the reduction in both parameters and FLOPs, the better the pruned model.…”
Section: Introduction
confidence: 99%
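The selection rule described in the two statements above (smallest accuracy drop, largest reduction in parameters and FLOPs) can be made concrete with a small sketch. The dictionary keys and the example numbers below are illustrative assumptions, not values from the cited works.

```python
def reduction_report(baseline: dict, pruned: dict) -> dict:
    """Quantify a pruned model against its unpruned baseline.

    Each model is a dict with 'acc' (top-1 accuracy), 'params'
    (parameter count), and 'flops' (floating point operations).
    Reductions are reported as fractions of the baseline.
    """
    return {
        "accuracy_drop": baseline["acc"] - pruned["acc"],
        "param_reduction": 1 - pruned["params"] / baseline["params"],
        "flop_reduction": 1 - pruned["flops"] / baseline["flops"],
    }


# Hypothetical example: pruning halves both parameters and FLOPs
# at the cost of one point of top-1 accuracy.
report = reduction_report(
    {"acc": 0.76, "params": 25_000_000, "flops": 4_000_000_000},
    {"acc": 0.75, "params": 12_500_000, "flops": 2_000_000_000},
)
```

Under this rule, the preferred pruned model is the one whose report shows the smallest `accuracy_drop` for a comparable (or larger) `param_reduction` and `flop_reduction`.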
“…Given the target pruning rate, average top-1 accuracy along with standard deviation from three experiments on the CIFAR-10 dataset is used to evaluate the capability of the pruning. As suggested by Joo et al [46], we used both final accuracy and accuracy drop w.r.t. baseline accuracy for performance evaluation.…”
Section: Experimental Settings
confidence: 99%
“…baseline accuracy for performance evaluation. In addition, we used average from scratches [46] with distinct baselines rather than using a single baseline while averaging the results from different experiments. Again, on the ImageNet dataset, both top-1 and top-5 accuracy are used to evaluate the performance.…”
Section: Experimental Settings
confidence: 99%