2019
DOI: 10.1002/cpe.5418

A customized precision format based on mantissa segmentation for accelerating sparse linear algebra

Abstract: In this work, we pursue the idea of radically decoupling the floating point format used for arithmetic operations from the format used to store the data in memory. We complement this idea with a customized precision memory format derived by splitting the mantissa (significand) of standard IEEE formats into segments, such that values can be accessed faster if lower accuracy is acceptable. Combined with precision-aware algorithms that dynamically adapt the data access accuracy to the numerical requirements, the …
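
To make the mantissa-segmentation idea concrete, the following is a minimal sketch assuming a binary64 value split into two 32-bit segments: a "head" holding the sign, exponent, and upper significand bits, and a "tail" holding the remaining low-order bits. The split point, the names, and the helper functions are illustrative only, not the exact format proposed in the paper.

    #include <cstdint>
    #include <cstring>

    // Split a binary64 value into two 32-bit segments.  The "head" carries the
    // sign, the 11-bit exponent, and the 20 most significant stored mantissa
    // bits; the "tail" carries the remaining 32 low-order mantissa bits.
    struct Segments {
        std::uint32_t head;
        std::uint32_t tail;
    };

    inline Segments split(double v) {
        std::uint64_t bits;
        std::memcpy(&bits, &v, sizeof bits);
        return { static_cast<std::uint32_t>(bits >> 32),
                 static_cast<std::uint32_t>(bits) };
    }

    // Reassemble at full accuracy from both segments ...
    inline double join(Segments s) {
        std::uint64_t bits = (static_cast<std::uint64_t>(s.head) << 32) | s.tail;
        double v;
        std::memcpy(&v, &bits, sizeof v);
        return v;
    }

    // ... or at reduced accuracy from the head segment alone (tail bits zeroed),
    // touching only half of the stored data.
    inline double join_head_only(std::uint32_t head) {
        std::uint64_t bits = static_cast<std::uint64_t>(head) << 32;
        double v;
        std::memcpy(&v, &bits, sizeof v);
        return v;
    }

Reading only the head segment halves the data volume fetched from memory, which is the faster access at lower accuracy that the abstract alludes to.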

Cited by 11 publications (17 citation statements)
References 12 publications

“…An important aspect in this context is the design of a “ memory accessor ” that converts data on the fly between the IEEE high-precision arithmetic format and the memory/communication format (Figure 11). The memory/communication format does not necessarily have to be part of the IEEE standard but can also be an arbitrary composition of sign, exponent, and significand bits (Grützmacher et al, 2019) or even nonstandard formats like Gustafson’s Posits (Unum type III, Gustafson, 2015). On an abstract level, the idea is to compress data before and after memory operations and only use the working precision in the arithmetic operations.…”
Section: Sparse Linear Algebra
confidence: 99%
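
A rough sketch of the "memory accessor" idea described in the statement above, assuming plain binary32 as the memory/communication format and binary64 as the arithmetic format. The Accessor class and its load/store interface are hypothetical and do not reproduce any particular library's API.

    #include <cstddef>
    #include <vector>

    // Values are stored in a compact memory format (here: float) but handed to
    // the caller in the higher-precision arithmetic format (double); the
    // conversion happens on the fly at every load and store.
    class Accessor {
    public:
        explicit Accessor(std::size_t n) : storage_(n) {}

        double load(std::size_t i) const {         // decompress on read
            return static_cast<double>(storage_[i]);
        }
        void store(std::size_t i, double value) {  // compress on write
            storage_[i] = static_cast<float>(value);
        }

    private:
        std::vector<float> storage_;               // memory/communication format
    };

    // All arithmetic is carried out in double; only the memory traffic shrinks.
    double dot(const Accessor& x, const Accessor& y, std::size_t n) {
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            sum += x.load(i) * y.load(i);
        return sum;
    }

The same pattern extends to segmented or non-IEEE storage formats: only load and store change, while the arithmetic kernel keeps working in the high-precision format.
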
“…According to the IEEE-754 standard, a positive normal double-precision floating-point number is a binary floating-point number where the 53-bit integer m (the significand) is in the interval [2^52, 2^53) while being interpreted as a number in [1, 2) by virtually dividing it by 2^52, and where the 11-bit exponent p ranges from −1022 to 1023 [7]. Such a double-precision number can represent all values between 2^−1022 and up to but not including 2^1024; these are the positive normal values.…”
Section: IEEE-754 Binary Floating-Point Numbers
confidence: 99%
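
To make the bit layout in the quoted passage concrete, this sketch extracts the sign, the 11-bit exponent (bias 1023), and the 52 stored significand bits from a binary64 value. It is standard IEEE-754 decoding, not code from the cited work.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        double x = 1.5;
        std::uint64_t bits;
        std::memcpy(&bits, &x, sizeof bits);

        std::uint64_t sign     = bits >> 63;                                      // 1 sign bit
        int           exponent = static_cast<int>((bits >> 52) & 0x7FF) - 1023;   // 11 bits, bias 1023
        std::uint64_t stored   = bits & ((1ull << 52) - 1);                       // 52 stored bits

        // For normal numbers the full 53-bit significand m = 2^52 + stored
        // lies in [2^52, 2^53); dividing by 2^52 maps it into [1, 2).
        std::uint64_t m = (1ull << 52) | stored;

        std::printf("sign=%llu  exponent=%d  m=%llu\n",
                    static_cast<unsigned long long>(sign), exponent,
                    static_cast<unsigned long long>(m));
        return 0;
    }
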
“…The single-precision floating-point numbers are similar but span 32 bits (binary32). They are binary floating-point numbers where the 24-bit significand m is in the interval [2^23, 2^24), considered as a value in [1, 2) after virtually dividing it by 2^23.

    name       exponent bits   significand (stored)   decimal digits (exact)
    binary64   11 bits         53 bits (52 bits)      15 (17)
    binary32   8 bits          24 bits (23 bits)      6 (9)

TABLE 2: Common IEEE-754 binary floating-point numbers: 64 bits (binary64) and 32 bits (binary32). A single bit is reserved for the sign in all cases.…”
Section: IEEE-754 Binary Floating-Point Numbers
confidence: 99%
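
The bit counts and decimal-digit counts in Table 2 can be checked directly against the compiler's own constants; a quick sketch using std::numeric_limits:

    #include <cstdio>
    #include <limits>

    int main() {
        // binary64: 53-bit significand, 15 exact / 17 round-trip decimal digits
        std::printf("double: significand bits=%d  digits10=%d  max_digits10=%d\n",
                    std::numeric_limits<double>::digits,
                    std::numeric_limits<double>::digits10,
                    std::numeric_limits<double>::max_digits10);
        // binary32: 24-bit significand, 6 exact / 9 round-trip decimal digits
        std::printf("float:  significand bits=%d  digits10=%d  max_digits10=%d\n",
                    std::numeric_limits<float>::digits,
                    std::numeric_limits<float>::digits10,
                    std::numeric_limits<float>::max_digits10);
        return 0;
    }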