53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2020
DOI: 10.1109/micro50266.2020.00071
GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference

Cited by 75 publications (39 citation statements)
References 19 publications
“…In Figure 1 and Table 8, we compare the proposed TernaryBERT with (i) other quantization methods, including mixed-precision Q-BERT (Shen et al., 2020), post-training quantization GOBO (Zadeh and Moshovos, 2020), as well as Quant-Noise, which uses product quantization (Fan et al., 2020); and (ii) other compression methods, including the weight-sharing method ALBERT (Lan et al., 2019), the pruning method LayerDrop (Fan et al., 2019), and the distillation methods DistilBERT and TinyBERT (Sanh et al., 2019; Jiao et al., 2019). The result of DistilBERT is taken from (Jiao et al., 2019).…”
Section: Comparison With Other Methods
confidence: 99%
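To make the comparison above concrete, the sketch below illustrates the general idea behind outlier-aware, dictionary-based post-training quantization of the kind the GOBO paper proposes: most weights are mapped to a small dictionary of centroids, while a few outliers are kept in full precision. This is an illustrative sketch only, not the authors' implementation; the outlier threshold, centroid count, and function names are assumptions.

# Illustrative sketch (not the authors' code): generic outlier-aware,
# dictionary-based post-training quantization of one weight tensor.
import numpy as np

def quantize_layer(w, num_centroids=8, outlier_sigma=3.0, iters=20):
    """Split weights into a 'Gaussian' group, quantized via a small dictionary
    of centroids, and an 'outlier' group kept in full precision (assumed setup)."""
    flat = w.ravel()
    mu, sigma = flat.mean(), flat.std()
    outlier_mask = np.abs(flat - mu) > outlier_sigma * sigma  # small fraction of weights
    gauss = flat[~outlier_mask]

    # Simple 1-D k-means to build the dictionary for the Gaussian group.
    centroids = np.quantile(gauss, np.linspace(0.0, 1.0, num_centroids))
    for _ in range(iters):
        idx = np.abs(gauss[:, None] - centroids[None, :]).argmin(axis=1)
        for k in range(num_centroids):
            if np.any(idx == k):
                centroids[k] = gauss[idx == k].mean()

    # Reconstruct: dictionary values for the Gaussian group, untouched FP32 outliers.
    deq = flat.copy()
    deq[~outlier_mask] = centroids[idx]
    return deq.reshape(w.shape), centroids, outlier_mask

# Example: quantize a random "weight matrix" and report outlier ratio and error.
w = np.random.randn(768, 768).astype(np.float32)
w_q, dictionary, outliers = quantize_layer(w)
print(outliers.mean(), np.abs(w - w_q).mean())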
“…On Transformer-based models, 8-bit fixed-point quantization is successfully applied in the fully-quantized Transformer (Prato et al., 2019) and Q8BERT (Zafrir et al., 2019). The use of lower bits is also investigated in (Shen et al., 2020; Fan et al., 2020; Zadeh and Moshovos, 2020). Specifically, in Q-BERT (Shen et al., 2020) and GOBO (Zadeh and Moshovos, 2020), mixed precision with 3 or more bits is used to avoid a severe accuracy drop.…”
Section: Quantization
confidence: 99%
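As a point of reference for the 8-bit fixed-point schemes mentioned in the statement above, a minimal sketch of symmetric per-tensor int8 weight quantization follows. It is an assumption-based illustration, not the exact recipe of the cited papers; the scaling scheme and helper names (quantize_int8, dequantize_int8) are hypothetical.

# Minimal sketch of symmetric per-tensor 8-bit fixed-point quantization.
import numpy as np

def quantize_int8(x):
    """Map float weights to int8 with a single per-tensor scale (assumed scheme)."""
    scale = np.abs(x).max() / 127.0                      # symmetric range [-127, 127]
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    return q.astype(np.float32) * scale

# Example: quantize a random matrix shaped like a BERT feed-forward weight.
w = np.random.randn(768, 3072).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
print("mean abs error:", np.abs(w - w_hat).mean())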
“…However, this method collects information from surrounding pixels and cannot generate dense contextual information. In recent years, attention modules have been successful in fields such as natural language processing [7][8][9][10][11], speech recognition [12,13], image inpainting [14][15][16] and image recognition [17][18][19]. The self-attention layer of work [8,11] …”
Section: Introduction
confidence: 99%