2019
DOI: 10.48550/arxiv.1910.01055
Preprint

QuaRL: Quantization for Sustainable Reinforcement Learning

Cited by 1 publication (1 citation statement)
References 0 publications
“…Post-training quantization (PTQ) takes a trained full-precision model (32-bit floating point, fp32) and quantizes its weights to lower-precision values, e.g. an 8-bit integer representation (int8) [19]. Uniform affine quantization is used to transform the fp32 values into int8 ones [29]. The process consists of first extracting the maximum and minimum values of the fp32 working range.…”
Section: Methods
confidence: 99%
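The quoted passage describes uniform affine quantization: derive a scale and zero-point from the min/max of the fp32 working range, then map each value onto the int8 grid. The sketch below is a minimal NumPy illustration of that scheme under common conventions; it is not code from the cited paper, and the function names (affine_quantize, dequantize) are hypothetical.

```python
import numpy as np

def affine_quantize(x_fp32: np.ndarray, num_bits: int = 8):
    """Uniform affine quantization: fp32 -> intN via a scale and zero-point."""
    qmin = -(2 ** (num_bits - 1))          # int8: -128
    qmax = 2 ** (num_bits - 1) - 1         # int8:  127
    # Step 1 (as in the quote): extract the min/max of the fp32 working range.
    x_min = min(float(x_fp32.min()), 0.0)  # extend the range to include 0 so that
    x_max = max(float(x_fp32.max()), 0.0)  # zero is exactly representable
    scale = (x_max - x_min) / (qmax - qmin) or 1e-8  # guard against all-zero input
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x_fp32 / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map intN values back to approximate fp32 values."""
    return scale * (q.astype(np.float32) - zero_point)

# Round-trip check: per-element error is bounded by one quantization step (scale).
w = np.random.randn(4, 4).astype(np.float32)
q, s, z = affine_quantize(w)
assert np.abs(dequantize(q, s, z) - w).max() <= s
```

The asymmetric (affine) variant keeps a zero-point so that fp32 zero maps exactly onto an integer value, which matters for zero-padded weights and activations.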