2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00363
Data-Free Network Quantization With Adversarial Knowledge Distillation

Cited by 93 publications (90 citation statements)
References 18 publications
“…Unified Framework Combining the above-mentioned techniques will lead to a unified inversion framework [Choi et al., 2020] for data-free knowledge distillation:…”
Section: Preliminary (mentioning)
Confidence: 99%
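The quoted fragment refers to the adversarial, data-free distillation setup this paper belongs to: a generator synthesizes inputs without access to the original training set, the teacher labels them, and the student is distilled on them. The code below is only a minimal PyTorch sketch of that generator-versus-student minimax loop, under assumed model and optimizer objects (generator, teacher, student, g_opt, s_opt); it illustrates the general technique, not the cited papers' exact losses or training schedule.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=1.0):
    # KL(teacher || student) on temperature-softened outputs, a common KD loss.
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

def adversarial_kd_step(generator, teacher, student, g_opt, s_opt,
                        batch_size=64, z_dim=128):
    teacher.eval()

    # Generator step: synthesize inputs on which student and teacher disagree most.
    z = torch.randn(batch_size, z_dim)
    x = generator(z)
    g_loss = -kd_loss(student(x), teacher(x).detach())
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    # Student step: match the teacher on freshly generated inputs.
    with torch.no_grad():
        x = generator(torch.randn(batch_size, z_dim))
        t_logits = teacher(x)
    s_loss = kd_loss(student(x), t_logits)
    s_opt.zero_grad()
    s_loss.backward()
    s_opt.step()
    return g_loss.item(), s_loss.item()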
“…T. and S. refer to the scratch accuracy of teachers and students on the original training data. We compare our approach with the following baselines: DAFL, ZSKT [Micaelli and Storkey, 2019], ADI [Yin et al., 2020], DFQ [Choi et al., 2020], and LS-GDFD [Luo et al., 2020]. They all follow the unified framework discussed in Sec.…”
Section: Benchmarks on Knowledge Distillation (mentioning)
Confidence: 99%
“…We show the results for data-free PTQ (DF-PTQ) and data-free QAT (DF-QAT), both with ZS-CGAN. We compare our method with the existing DF-Q schemes in [11][12][13][14], and bold numbers indicate the best results among DF-Q schemes. For data-dependent quantization (DD-Q), we show the results for PTQ and QAT by using the original ImageNet dataset [8].…”
Section: Methods (mentioning)
Confidence: 99%
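The quote above distinguishes data-free post-training quantization (DF-PTQ) from data-free quantization-aware training (DF-QAT). As background for that terminology only, here is a generic sketch of symmetric per-tensor int8 weight quantization in PyTorch: PTQ applies the rounding once after training, while QAT wraps it in a straight-through estimator so fine-tuning can continue on (synthetic or real) data. The helper names, bit-width, and max-based scale are assumptions for illustration, not the quantizers used in the compared schemes.

import torch

def fake_quantize(w, num_bits=8):
    # PTQ-style step: derive a scale from the trained weights and round once.
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_int = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return w_int * scale  # dequantized ("fake-quant") weights

class FakeQuantSTE(torch.autograd.Function):
    # QAT building block: quantize in the forward pass, pass gradients
    # straight through in the backward pass so the network can keep training.
    @staticmethod
    def forward(ctx, w, num_bits=8):
        return fake_quantize(w, num_bits)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None

# Usage: PTQ quantizes weights once after training (fake_quantize(w));
# QAT instead uses FakeQuantSTE.apply(w) inside the forward pass during fine-tuning.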
“…Secondly, additional re-training epochs are often needed in order to recover from the accuracy drop caused by quantization. Note that re-training is time-consuming and the original training data are not always available for privacy or security reasons [31,32]. Recent trends suggest the use of alternative floating-point formats rather than integers, such as Bfloats [10] or reduced-precision floating point [33,34], which require no extra fine-tuning steps.…”
Section: Convnets Approximation via Arithmetic Approximation and Data-Reuse (mentioning)
Confidence: 99%
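As a small, self-contained illustration of the contrast drawn in this quote (an integer quantizer needs a data-derived scale and is typically followed by calibration or re-training, whereas a reduced-precision float format is a plain cast), the snippet below compares bfloat16 rounding with max-calibrated int8 on a random tensor in PyTorch. The tensor and the max-based calibration are assumptions for illustration only, not results from the cited works.

import torch

w = torch.randn(1000) * 0.1  # stand-in for a trained weight tensor

# Reduced-precision float: a plain cast, no scale to choose, no fine-tuning loop.
w_bf16 = w.to(torch.bfloat16).to(torch.float32)

# Int8: a scale must be derived from the data (here: simple max calibration),
# and in practice the accuracy drop is often recovered with re-training.
scale = w.abs().max() / 127
w_int8 = torch.clamp(torch.round(w / scale), -128, 127) * scale

print("bfloat16 max abs error:", (w - w_bf16).abs().max().item())
print("int8     max abs error:", (w - w_int8).abs().max().item())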