2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00156

Zero-shot Adversarial Quantization

Cited by 50 publications (21 citation statements)
References 21 publications

“…Note that nWmA means n-bit quantization for weights and m-bit quantization for activations. As baselines, we selected ZeroQ [3], ZAQ [7], and GDFQ [5] as the important previous works on generative data-free quantization. We report top-1 accuracy for each experiment.…”
Section: Results
confidence: 99%
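
The nWmA shorthand described above corresponds to applying separate uniform quantizers to a layer's weights (n bits) and activations (m bits). A minimal PyTorch sketch, assuming a symmetric per-tensor quantizer with a max-abs scale (the function name and the per-tensor granularity are illustrative choices, not details taken from the cited papers):

```python
import torch

def uniform_quantize(x: torch.Tensor, num_bits: int) -> torch.Tensor:
    # Symmetric, per-tensor uniform quantizer: round onto a signed num_bits
    # grid, then rescale back ("fake quantization"), as commonly used in
    # post-training quantization experiments.
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    return torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale

# "4W8A" in the excerpt's notation: 4-bit weights, 8-bit activations.
w = torch.randn(64, 128)               # a layer's weight tensor
a = torch.relu(torch.randn(32, 128))   # that layer's input activations
w_q = uniform_quantize(w, num_bits=4)
a_q = uniform_quantize(a, num_bits=8)
```

Per-channel weight scales and learned clipping ranges are common refinements, but the notation itself only fixes the two bit-widths.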
“…DSG [4] and IntraQ [6] focus on increasing the diversity of data impressions to expand the coverage space of synthetic samples. ZAQ [7] mainly works by exploring and transferring information of individual samples and their correlations, mitigating the gap between the full-precision and quantized models. Qimera [8] employs superposed latent embeddings to create boundary supporting samples, which is ignored in conventional methods.…”
Section: Related Work
confidence: 99%
“…[62] use weight (only) quantization for medical image segmentation as an attempt to remove noise and not for computational efficiency. The recent work of [39] shows both a sophisticated post-training quantization scheme and includes fine-tuned semantic segmentation results. Again a significant degradation is observed when going from 6 to 4 bits.…”
Section: Related Work
confidence: 99%
“…Some researchers obtained synthetic samples that resemble the distribution of the authentic sample by using information from the pre-trained full-precision network, such as batch-normalization (BN) statistics. These approaches can be categorized into noise-optimized data-free quantization [8, 12–14] and generative data-free quantization [9, 15–17]. The former initializes a sample that satisfies the Gaussian distribution, and the dimension of the sample is consistent with the size of a real sample.…”
Section: Introduction
confidence: 99%
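
The noise-optimized pipeline described in the last excerpt, where a Gaussian-initialized input is refined until the batch-norm statistics it induces match those stored in the pre-trained full-precision network, can be sketched roughly as follows. This is a simplified sketch in the spirit of ZeroQ [3]; the ResNet-18 stand-in, step count, learning rate, and plain squared-error loss are assumptions for illustration, not settings from any cited paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Synthesize "data impressions" from pure noise by matching the batch-norm
# statistics stored in a pre-trained full-precision network.
model = models.resnet18().eval()  # stand-in; use the real pre-trained FP model

bn_layers = [m for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
captured = []  # (mean, var) of the input actually fed to each BN layer

def grab_stats(module, inputs, output):
    x = inputs[0]
    captured.append((x.mean(dim=(0, 2, 3)), x.var(dim=(0, 2, 3), unbiased=False)))

handles = [bn.register_forward_hook(grab_stats) for bn in bn_layers]

# Gaussian-initialized synthetic batch with the shape of real inputs.
x = torch.randn(8, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.1)

for step in range(200):
    captured.clear()
    opt.zero_grad()
    model(x)
    # Penalize the gap between induced and stored BN statistics at every layer.
    loss = torch.zeros(())
    for (mu, var), bn in zip(captured, bn_layers):
        loss = loss + (mu - bn.running_mean).pow(2).mean() \
                    + (var - bn.running_var).pow(2).mean()
    loss.backward()
    opt.step()

for h in handles:
    h.remove()
# x now approximates inputs consistent with the network's BN statistics and can
# serve as calibration or fine-tuning data for the quantized model.
```

In the data-free setting this synthetic batch plays the role that a held-out calibration set would normally play, which is why the quality and diversity of the impressions matter so much in the works cited above.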