2021
DOI: 10.48550/arxiv.2112.11438
Preprint

Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition

Junhao Xu, Jianwei Yu, Shoukang Hu, et al.

Abstract: State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications. Low-bit neural network quantization provides a powerful solution to dramatically reduce their model size. Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors. To this end, novel mixed precision…
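The abstract contrasts uniform-precision quantization with the paper's mixed precision approach, where different parts of the model receive different bit-widths according to their sensitivity to quantization error. The sketch below is a minimal illustration of that idea only, not the paper's method: quantize_uniform, the layer names, and the per-layer bit-widths are all hypothetical, and the sensitivity-driven bit-width selection the paper proposes is replaced here by hard-coded values.

```python
import numpy as np

def quantize_uniform(w: np.ndarray, n_bits: int) -> np.ndarray:
    """Symmetric uniform quantization of a weight tensor to n_bits.

    Illustrative sketch only -- not the paper's exact scheme. The paper's
    contribution is assigning n_bits per layer (mixed precision) based on
    each layer's sensitivity to quantization error.
    """
    qmax = 2 ** (n_bits - 1) - 1           # e.g. 7 for 4-bit signed values
    scale = np.abs(w).max() / qmax         # per-tensor scale factor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                       # dequantized (simulated) weights

# Hypothetical layers and bit-widths: in a mixed precision scheme, layers
# that are more sensitive to quantization error keep more bits.
layer_weights = {
    "lstm_1": np.random.randn(256, 256),
    "lstm_2": np.random.randn(256, 256),
    "output": np.random.randn(256, 10000),
}
bit_widths = {"lstm_1": 8, "lstm_2": 4, "output": 2}  # assumed values

quantized = {name: quantize_uniform(w, bit_widths[name])
             for name, w in layer_weights.items()}
```

In the setting the abstract describes, the bit_widths assignment would be determined automatically from each layer's measured sensitivity rather than fixed by hand as above.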


References: 51 publications (99 reference statements)
Cited by: 0 publications