2022
DOI: 10.48550/arxiv.2210.16621
Preprint

Empirical Evaluation of Post-Training Quantization Methods for Language Tasks

Abstract: Transformer-based architectures like BERT have achieved great success in a wide range of Natural Language tasks. Despite their decent performance, the models still have numerous parameters and high computational complexity, impeding their deployment in resource-constrained environments. Post-Training Quantization (PTQ), which enables low-bit computations without extra training, could be a promising tool. In this work, we conduct an empirical evaluation of three PTQ methods on BERT-Base and BERT-Large: Linear Qu…
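For context on the Linear Quantization method named in the abstract, the sketch below shows uniform affine (linear) post-training quantization of a single weight tensor: the scale and zero-point are derived from the tensor's observed min/max range, so no retraining is needed. The function names, the 8-bit setting, and the per-tensor granularity are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def linear_quantize(weights, num_bits=8):
    """Uniform affine (linear) quantization of a float tensor to unsigned ints.

    Illustrative post-training quantization: scale and zero-point come from the
    tensor's min/max range, so no extra training is required.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / (qmax - qmin)
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def linear_dequantize(q, scale, zero_point):
    """Map quantized integers back to approximate floating-point values."""
    return (q.astype(np.float32) - zero_point) * scale

# Example: quantize a random stand-in for a BERT weight matrix and check the error.
w = np.random.randn(768, 768).astype(np.float32)
q, s, z = linear_quantize(w)
w_hat = linear_dequantize(q, s, z)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```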

Cited by 0 publications
References 20 publications (35 reference statements)