2020
DOI: 10.1109/tcad.2018.2883902
FlexFloat: A Software Library for Transprecision Computing

Abstract: In recent years, approximate computing has been extensively explored as a paradigm for designing hardware and software solutions that save energy by trading off the quality of the computed results. In applications that involve numerical computations with a wide dynamic range, precision tuning of floating-point (FP) variables is a key knob for leveraging the energy/quality trade-off of program results. This aspect assumes maximum relevance in the transprecision computing scenario, where the accuracy of data is tuned at fi…

Cited by 27 publications (24 citation statements) · References 51 publications
“…This topic is also the core of the automotive stream in the H2020 European Processor Initiative (embedded HPC for autonomous driving with BMW as the main technology end-user [9,10]), which funds this work. To address the above issues, new computing arithmetic styles are appearing in research [11][12][13][14][15][16][17][18][19][20], overcoming the classic fixed-point (INT) vs. IEEE-754 floating-point duality in the case of embedded DNN (Deep Neural Network) signal processing. Just as an example, Intel is proposing BFLOAT16 (Brain Floating Point), which has the same number of exponent bits as single-precision floating point, allowing it to replace binary32 in practical uses, although with less precision.…”
Section: Introduction
confidence: 99%
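To make that exponent/mantissa trade-off concrete, the sketch below (not taken from the cited papers) converts a binary32 value to a bfloat16-style encoding by keeping the sign bit, all 8 exponent bits, and the top 7 mantissa bits. The helper names are hypothetical and the rounding is a simplified round-to-nearest; NaN handling is omitted.

```cpp
// Minimal sketch: bfloat16 keeps binary32's sign and 8 exponent bits,
// so truncating the mantissa to 7 bits preserves the dynamic range
// while giving up precision. Function names are hypothetical.
#include <cstdint>
#include <cstring>
#include <cstdio>

static uint16_t float_to_bfloat16(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);      // reinterpret the binary32 encoding
    bits += 0x7FFF + ((bits >> 16) & 1);      // round to nearest, ties to even (simplified)
    return static_cast<uint16_t>(bits >> 16); // keep sign + 8 exponent bits + 7 mantissa bits
}

static float bfloat16_to_float(uint16_t h) {
    uint32_t bits = static_cast<uint32_t>(h) << 16; // pad dropped mantissa bits with zeros
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}

int main() {
    float x = 3.14159265f;
    float y = bfloat16_to_float(float_to_bfloat16(x));
    std::printf("binary32: %.8f  bfloat16 round-trip: %.8f\n", x, y);
}
```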
“…The Tesla FSD chip exploits a neural processing unit using 8-bit by 8-bit integer multiplies and 32-bit integer additions. Transprecision computing for DNNs is also proposed in the state of the art by academia [14] and industry, e.g. IBM and Greenwaves in [15].…”
Section: Introduction
confidence: 99%
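A minimal sketch of the arithmetic pattern described in the quote above, assuming a generic quantized dot product with 8-bit operands and a 32-bit accumulator; this only illustrates the idea and is not the Tesla FSD datapath.

```cpp
// 8-bit by 8-bit signed multiplies accumulated into a 32-bit integer:
// the wide accumulator absorbs the growth of the 16-bit products.
#include <cstdint>
#include <cstddef>

int32_t dot_int8(const int8_t* a, const int8_t* b, std::size_t n) {
    int32_t acc = 0;
    for (std::size_t i = 0; i < n; ++i) {
        acc += static_cast<int32_t>(a[i]) * static_cast<int32_t>(b[i]);
    }
    return acc;
}
```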
“…The FlexFloat C++ library of Tagliavini et al. [23] offers alternative FP formats with variable bit-width mantissas and exponents. They demonstrate that FlexFloat is up to 2.8× and 2.4× faster than MPFR and SoftFloat, respectively, for various benchmarks.…”
Section: Background and Motivation
confidence: 99%
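As a rough illustration of how such a library might be used, the sketch below assumes a value type parameterized by exponent and mantissa bit-widths; the header name and the flexfloat<exp_bits, frac_bits> template signature are assumptions inferred from the citing paper's description, not a verified API reference.

```cpp
// Hedged sketch of a FlexFloat-style reduced-precision type, configured
// here like binary16 (5 exponent bits, 10 mantissa bits). Values are
// rounded to the reduced format on assignment and arithmetic is emulated
// at that precision. Header and template names are assumptions.
#include "flexfloat.hpp"
#include <cstdio>

int main() {
    flexfloat<5, 10> a = 1.5;       // stored at reduced precision
    flexfloat<5, 10> b = 0.333333;  // rounded on assignment
    flexfloat<5, 10> c = a * b;     // product computed at reduced precision

    // Converting back to double exposes the precision lost to the 10-bit mantissa.
    std::printf("reduced-precision product: %.10f\n", static_cast<double>(c));
    return 0;
}
```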
“…To address custom-FP in central processing units (CPUs), FP simulators such as FlexFloat [23] and Berkeley's SoftFP [13] are available. These simulators support arbitrary or custom range and precision FP, such as 16-, 32-, 64-, 80- and 128-bit, with corresponding fixed-width mantissas and exponents, respectively.…”
Section: Introduction
confidence: 99%
“…The Tesla FSD chip exploits a neural processing unit using 8-bit by 8-bit integer multiplies and 32-bit integer additions. Transprecision computing for DNNs is also proposed in the state of the art by academia [20] and industry, e.g. IBM and Greenwaves in [21].…”
Section: Introduction
confidence: 99%