2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00197

Discrete Model Compression With Resource Constraint for Deep Neural Networks

Cited by 57 publications (26 citation statements). References 14 publications.
“…We compare PSS-Net with some compact models, including channel pruning and Network Architecture Search (NAS) methods. Channel pruning methods include PFS (Wang et al. 2020), MetaPruning, MDP (Liu et al. 2020), AMC (He et al. 2018), NetAdapt (Yang et al. 2018), EagleEye, AutoSlim (Yu and Huang 2019a), DPFPS (Ruan et al. 2021), CC (Li et al. 2021), DMCP, CPLI (Guo, Ouyang, and Xu 2020a) and DMC (Gao et al. 2020). NAS methods include DARTS (Liu, Simonyan, and Yang 2019), PNAS (Liu et al. 2018), NASNet (Zoph et al. 2018) and ChamNet (Dai et al. 2019).…”
Section: Results (mentioning)
confidence: 99%
“…In particular, we found that the implementations of previous pruning algorithms have many notable differences in their retraining step: some employed a small learning rate (e.g., 0.001 on ImageNet) to fine-tune the network (Molchanov et al., 2016; Han et al., 2015) for a small number of epochs (e.g., 20 epochs); some used a larger learning rate (0.01) with much longer retraining budgets, e.g., 60, 100 and 120 epochs respectively on ImageNet (Zhuang et al., 2018; Gao et al., 2020; Li et al., 2020). [Figure 1: Learning rate with different schedules on CIFAR when retraining for 72 epochs; in (a), the learning rate is fixed to the last learning rate of the original training.]…”
Section: Preliminary and Methodology (mentioning)
confidence: 99%
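The quoted passage contrasts two common retraining regimes for pruned networks: brief fine-tuning with a small constant learning rate versus longer retraining with a larger, decayed learning rate. The PyTorch-style sketch below only illustrates that contrast and is not code from any cited paper; the model, data loader, epoch counts, and cosine decay are placeholder assumptions chosen to mirror the values quoted above (0.001 for about 20 epochs vs. 0.01 for up to 120 epochs).

```python
# Hypothetical sketch: two retraining regimes for an already-pruned model.
# Regime (a): small constant LR, few epochs.  Regime (b): larger initial LR,
# longer budget, decayed here with a cosine schedule as one common choice.
import torch.nn as nn
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR


def retrain(pruned_model, loader, regime="finetune"):
    criterion = nn.CrossEntropyLoss()
    if regime == "finetune":
        # (a) e.g., lr = 0.001 for ~20 epochs, as in the short fine-tuning setups
        epochs = 20
        optimizer = SGD(pruned_model.parameters(), lr=1e-3,
                        momentum=0.9, weight_decay=1e-4)
        scheduler = None
    else:
        # (b) e.g., lr = 0.01 with a 60-120 epoch budget and a decaying schedule
        epochs = 120
        optimizer = SGD(pruned_model.parameters(), lr=1e-2,
                        momentum=0.9, weight_decay=1e-4)
        scheduler = CosineAnnealingLR(optimizer, T_max=epochs)

    for _ in range(epochs):
        for images, targets in loader:
            optimizer.zero_grad()
            loss = criterion(pruned_model(images), targets)
            loss.backward()
            optimizer.step()
        if scheduler is not None:
            scheduler.step()
    return pruned_model
```

Because the two regimes differ in both learning rate and budget, comparing pruned models retrained under different regimes can confound the pruning criterion with the retraining recipe, which is the point the quoted passage raises.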
“…With … = (0.1, 0.9) we deploy two sets of experiments; the corresponding accuracies after model compression are 70.55% and 70.53%, respectively; clearly (…, …) … [27], Li et al. [7], ThiNet [6], Slimming [8], NRE [28], SSS [29], DCP [13], SFP [30], ASFP [31], FPGM [32], AMC [33], COP [34], KSE [35], GAL [15], HRank [36], DHP [37], SFS&DFS [38], LFPC [39] and DMC [40].…”
Section: Experimental setup (unclassified)