2021
DOI: 10.1145/3461699

Low-precision Logarithmic Number Systems

Abstract: Logarithmic number systems (LNS) represent real numbers in many applications as a constant base raised to a fixed-point exponent, giving the representable values an exponential distribution. This greatly simplifies hardware multiply, divide, and square root. Base-2 LNS is the most common, but in this article we show that for low-precision LNS the choice of base has a significant impact. We make four main contributions. First, LNS is not closed under addition and subtraction, so the result is approximate. We …


Cited by 9 publications (13 citation statements)
References 37 publications
“…However, most commercial computing systems, including off-the-shelf processors, support only the IEEE 754 standard FP formats, i.e., 64-bit double precision (DP) and 32-bit single precision (SP) [40]. Therefore, FP numbers cannot use fewer than 32 bits in such systems, which restricts the reduction of data movement between memories and registers in applications where low precision can be used, such as machine learning (ML) algorithms [1, 28, 33, 36–38]. SP is currently the dominant data type for deep learning training systems, and recent studies have even shown that low-precision FP formats such as 8- and 16-bit data types can be used [33, 36–38].…”
Section: Introduction
“…There are many works in the areas of signal processing [23], video processing [4], and neural networks [28] that have studied the logarithmic number system (LNS) as an alternative FP number system for embedded applications [1]. LNS converts FP numbers to fixed-point numbers, which can be represented as integers on off-the-shelf CPUs.…”
Section: Introduction
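The quoted passage's point — that an LNS value is just a fixed-point logarithm, storable and manipulable as an ordinary integer on a stock CPU — can be sketched in a few lines. This assumes a hypothetical 16-bit word with 9 fractional bits; the helper names are illustrative, not from the paper:

```python
import math

FRAC = 9  # hypothetical: fixed-point log2 with 9 fractional bits in a 16-bit word

def to_lns(x):
    """Store the base-2 log of a positive real as a small plain integer."""
    return round(math.log2(x) * (1 << FRAC))

def from_lns(w):
    return 2.0 ** (w / (1 << FRAC))

# A chain of FP multiplies becomes a chain of integer adds -- ordinary
# integer hardware, no FP multiplier needed:
acc = sum(to_lns(v) for v in (1.5, 2.0, 0.75))
print(from_lns(acc))  # close to 1.5 * 2.0 * 0.75 = 2.25
```

The residual error (here on the order of 0.1%) comes from rounding the logarithms to 9 fractional bits, which is exactly the precision/base trade-off the cited article studies.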