2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01540
Diversifying Sample Generation for Accurate Data-Free Quantization

Cited by 59 publications (35 citation statements)
References 27 publications
“…In cases such as binary or ternary quantization, the proposed method would greatly benefit from fine-tuning. Generated data, obtained with methods similar to [41,40], may provide better insight into input and activation distributions, which could in theory improve the scale estimation for input quantization.…”
Section: Discussion
confidence: 99%
“…The standard method for input quantization, introduced in [30] and later used in [8,41], is static: its parameters are computed once during quantization and then fixed during inference. This yields the best inference speed, but at the cost of lower accuracy due to coarse modeling of the input ranges.…”
Section: Input Quantization
confidence: 99%
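The static scheme described in this excerpt can be sketched as follows: a quantization scale is computed once from calibration data and then reused unchanged at inference time. This is a minimal illustrative sketch, not the cited papers' actual code; the function names and the symmetric max-abs calibration rule are assumptions.

```python
import numpy as np

def calibrate_scale(calibration_batches, num_bits=8):
    # Static scheme: the scale is derived once from the observed
    # calibration range, then frozen for every subsequent inference.
    max_abs = max(float(np.abs(batch).max()) for batch in calibration_batches)
    qmax = 2 ** (num_bits - 1) - 1  # 127 for signed 8-bit
    return max_abs / qmax

def quantize(x, scale, num_bits=8):
    # Quantize inputs with the pre-computed, fixed scale.
    qmax = 2 ** (num_bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)

rng = np.random.default_rng(0)
calib = [rng.standard_normal((16, 8)) for _ in range(4)]
scale = calibrate_scale(calib)                      # computed once
q = quantize(rng.standard_normal((16, 8)), scale)   # scale reused at inference
```

Because the scale never adapts to the live input, values outside the calibration range are clipped, which is the coarse range modeling the excerpt refers to.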
“…To mitigate the accuracy degradation, almost all quantization methods require access to the original dataset for re-training/fine-tuning the model parameters [7,11,16,29,37]. Unfortunately, in scenarios involving sensitive data (e.g., medical and biometric data), these methods are no longer applicable because the original dataset is unavailable [34,38]. Therefore, data-free quantization is regarded as a promising and practical scheme and has recently been widely investigated [3,39].…”
Section: Introduction
confidence: 99%
“…The key issue is how to generate effective and meaningful samples to ensure calibration accuracy. A notable line of research proposes batch normalization (BN) regularization [3,38], which states that the statistics (i.e., the mean and standard deviation) encoded in the BN layers can represent the distribution of the original training data. These methods, however, apply only to convolutional neural networks (CNNs) and not to vision transformers, because the latter employ layer normalization (LN), which, unlike BN, does not store any statistics from training.…”
Section: Introduction
confidence: 99%
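The BN regularization idea in this excerpt amounts to penalizing the distance between the batch statistics of generated samples and the running mean/variance stored in a pretrained BN layer. A minimal sketch of such a loss, with illustrative names (not the cited papers' exact formulation):

```python
import numpy as np

def bn_regularization_loss(features, running_mean, running_var):
    # Match the batch statistics of generated features to the statistics
    # a pretrained BN layer recorded during training. Generated inputs
    # would be optimized to drive this loss toward zero.
    batch_mean = features.mean(axis=0)
    batch_var = features.var(axis=0)
    return (np.linalg.norm(batch_mean - running_mean)
            + np.linalg.norm(batch_var - running_var))

# The loss vanishes exactly when the generated batch reproduces the
# stored statistics, and grows as the distributions diverge.
matched = bn_regularization_loss(np.zeros((4, 3)), np.zeros(3), np.zeros(3))
```

An LN layer keeps no such running buffers, which is why this regularizer has no direct analogue for vision transformers.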
“…Traditional standard CNN-based models contain millions of parameters [1,2,3,4,5], which causes problems in many practical settings, e.g., developing real-time interactive vision applications for portable devices, or transferring knowledge from existing models to novel categories without re-training. In contrast, learning novel concepts from few examples is an interesting and challenging task with many practical advantages.…”
Section: Introduction
confidence: 99%