2018 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date.2018.8342176

The transprecision computing paradigm: Concept, design, and applications

Abstract: Guaranteed numerical precision of each elementary step in a complex computation has been the mainstay of traditional computing systems for many years. This era, fueled by Moore's law and the constant exponential improvement in computing efficiency, is at its twilight: from tiny nodes of the Internet-of-Things to large HPC computing centers, sub-picoJoule/operation energy efficiency is essential for practical realizations. To overcome the power wall, a shift from traditional computing paradigms is now mandatory…

Cited by 61 publications (38 citation statements)
References 12 publications (8 reference statements)
“…This topic is also at the core of the automotive stream of the H2020 European Processor Initiative (embedded HPC for autonomous driving, with BMW as the main technology end-user [9,10]), which funds this work. To address the above issues, new computing arithmetic styles are appearing in research [11][12][13][14][15][16][17][18][19][20], overcoming the classic fixed-point (INT) vs. IEEE-754 floating-point duality for embedded DNN (Deep Neural Network) signal processing. As one example, Intel is proposing BFLOAT16 (Brain Floating Point), which has the same number of exponent bits as single-precision floating point, so it can replace binary32 in practical use, albeit with less precision.…”
Section: Introduction (mentioning; confidence: 99%)
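To make the BFLOAT16 remark above concrete, here is a minimal C sketch (not taken from the cited paper; the helper names f32_to_bf16 and bf16_to_f32 are illustrative): BFLOAT16 keeps the sign bit and all 8 exponent bits of binary32 and truncates the mantissa from 23 to 7 bits, so the dynamic range of binary32 is preserved and only precision is lost. NaN/Inf corner cases are ignored for brevity.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch: convert binary32 -> BFLOAT16 and back.
 * Same 8 exponent bits as binary32, mantissa cut to 7 bits. */
static uint16_t f32_to_bf16(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);            /* reinterpret the bits safely  */
    u += 0x7FFFu + ((u >> 16) & 1u);     /* round to nearest even        */
    return (uint16_t)(u >> 16);          /* keep the top 16 bits         */
}

static float bf16_to_f32(uint16_t b) {
    uint32_t u = (uint32_t)b << 16;      /* zero-fill the lost mantissa  */
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}

int main(void) {
    float x = 3.14159265f;
    printf("binary32: %.8f  bfloat16 round-trip: %.8f\n",
           x, bf16_to_f32(f32_to_bf16(x)));
    return 0;
}

The round-trip value differs from the original only in the low mantissa bits, which is exactly the "less precision" traded away by the compact format.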
“…Transprecision computing for DNNs has also been proposed in the state of the art by academia [14] and industry, e.g. IBM and Greenwaves in [15]. Signal-processing sparsity has recently been exploited [16,17] to compress ML complexity and reach real-time computing on edge devices.…”
Section: Introduction (mentioning; confidence: 99%)
“…Approximation techniques. In recent years, the most promising approaches to performance gains have traded off application accuracy [30], [31]. Algorithm-level approximate computing techniques are well known in the literature [32], [33], [3], and also represent an important challenge for HPC applications [34], [35].…”
Section: Related Work (mentioning; confidence: 99%)
“…Notable examples of compact formats are Brain Floats (BFLOAT) and Flexpoint [4,5], optimized variants of the 16-bit IEEE 754 standard floating-point format, used by Google for their TPU (tensor processing unit) engines. Other formats also stem from the concept of transprecision computing [6,7] (NVIDIA Turing architectures allow computation with 4-, 8-, and 32-bit integers and with 16- and 32-bit floats). The up-and-coming Posit format has been shown, theoretically [8][9][10] and practically [11], to be a viable replacement for IEEE floats when applied to DNNs, in terms of both efficiency and accuracy.…”
Section: Introduction (mentioning; confidence: 99%)
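As a quick orientation for the formats named in the statement above, the short C sketch below simply tabulates the (sign, exponent, fraction) field widths of binary32, IEEE binary16, BFLOAT16, and a 16-bit posit; the posit row is an assumption (es = 1, fraction width quoted for values near 1.0, since posits use a variable-length regime field), and Flexpoint is omitted because it is a block format with a shared exponent rather than a fixed per-value layout.

#include <stdio.h>

/* Illustrative comparison only. The IEEE-style rows have fixed field
 * widths; the posit<16,1> row assumes es = 1 and lists the fraction
 * width available near 1.0 (regime = 2 bits), since posit field widths
 * vary with the magnitude of the value. */
int main(void) {
    printf("%-12s %5s %9s %9s\n", "format", "sign", "exponent", "fraction");
    printf("%-12s %5d %9d %9d\n", "binary32",    1, 8, 23);
    printf("%-12s %5d %9d %9d\n", "binary16",    1, 5, 10);
    printf("%-12s %5d %9d %9d\n", "bfloat16",    1, 8,  7);
    printf("%-12s %5d %9d %9d\n", "posit<16,1>", 1, 1, 12); /* near 1.0 */
    return 0;
}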