2020
DOI: 10.1145/3380934

Acceleration of PageRank with Customized Precision Based on Mantissa Segmentation

Abstract: We describe the application of a communication-reduction technique for the PageRank algorithm that dynamically adapts the precision of the data access to the numerical requirements of the algorithm as the iteration converges. Our variable-precision strategy, using a customized precision format based on mantissa segmentation (CPMS), abandons the IEEE 754 single- and double-precision number representation formats employed in the standard implementation of PageRank, and instead handles the data in memory using a …
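As a rough illustration of the mantissa-segmentation idea, here is a minimal C++ sketch. The two-way 32-bit split, the names, and the access pattern are assumptions chosen for clarity, not the paper's exact CPMS layout: each FP64 value is stored as a "head" segment (sign, exponent, upper mantissa bits) and a "tail" segment (remaining mantissa bits), so early iterations can read only the head and halve the memory traffic.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>
#include <cstdio>

// Illustrative segmented storage: the high and low 32 bits of each
// FP64 value are kept in separate arrays. Segment widths are an
// assumption for this sketch, not the paper's exact format.
struct Segmented {
    std::vector<uint32_t> head;  // high 32 bits of each FP64
    std::vector<uint32_t> tail;  // low  32 bits of each FP64
};

Segmented split(const std::vector<double>& x) {
    Segmented s{std::vector<uint32_t>(x.size()),
                std::vector<uint32_t>(x.size())};
    for (size_t i = 0; i < x.size(); ++i) {
        uint64_t bits;
        std::memcpy(&bits, &x[i], sizeof bits);  // type-pun via memcpy
        s.head[i] = static_cast<uint32_t>(bits >> 32);
        s.tail[i] = static_cast<uint32_t>(bits);
    }
    return s;
}

// Read only the head for reduced precision (less memory traffic),
// or head + tail to recover the exact FP64 value.
double fetch(const Segmented& s, size_t i, bool full_precision) {
    uint64_t bits = static_cast<uint64_t>(s.head[i]) << 32;
    if (full_precision) bits |= s.tail[i];
    double v;
    std::memcpy(&v, &bits, sizeof v);
    return v;
}

int main() {
    std::vector<double> x = {0.123456789012345};
    Segmented s = split(x);
    std::printf("head only: %.15f\nfull:      %.15f\n",
                fetch(s, 0, false), fetch(s, 0, true));
}
```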

Cited by 11 publications (24 citation statements). References 12 publications.
“…Although these 16-bit data formats have been reported to be efficient for deep learning training, and [11] predicted the industry-wide adoption of BFloat16 format for many applications, narrow floating-point formats are highly sensitive to the application characteristics. For instance, [13] proposed a 16-bit format for PageRank that captures the top two bytes of FP64. This makes conversion to/from FP64 efficient.…”
Section: Introduction (mentioning)
confidence: 99%
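A minimal sketch of such a truncation-based 16-bit format (the function names are hypothetical, and the cited work may round rather than truncate): the top two bytes of the FP64 bit pattern keep the sign, the full 11-bit exponent, and 4 mantissa bits, so widening back to FP64 is a shift with zero fill and needs no exponent re-biasing.

```cpp
#include <cstdint>
#include <cstring>

// Keep only the top 16 bits of the FP64 bit pattern: 1 sign bit,
// the full 11-bit exponent, and the upper 4 mantissa bits.
// Truncation is used here for simplicity.
uint16_t fp64_to_top16(double v) {
    uint64_t bits;
    std::memcpy(&bits, &v, sizeof bits);
    return static_cast<uint16_t>(bits >> 48);
}

// Widening is a left shift with zero fill -- no exponent re-biasing
// is needed, which is what makes conversion to/from FP64 cheap.
double top16_to_fp64(uint16_t h) {
    uint64_t bits = static_cast<uint64_t>(h) << 48;
    double v;
    std::memcpy(&v, &bits, sizeof v);
    return v;
}
```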
“…Floating-point computation is fairly complex to emulate as bit-wise operations on a general-purpose instruction set [14]. It is more efficient to convert compact storage formats to a format for which arithmetic is supported by hardware, typically FP32 or FP64 [7,13]. Conversions include changes in the bit width of the exponent and rounding of the mantissa.…”
Section: Introduction (mentioning)
confidence: 99%
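A minimal sketch of such a conversion, using standard IEEE FP16 -> FP32 widening as the example (normal values and zero only; subnormal, infinity, and NaN handling is omitted for brevity): both steps the statement mentions appear here, re-biasing the exponent across different bit widths and repositioning the mantissa.

```cpp
#include <cstdint>
#include <cstring>

// Widen IEEE FP16 (5-bit exponent, bias 15; 10-bit mantissa) to
// FP32 (8-bit exponent, bias 127; 23-bit mantissa).
float fp16_to_fp32(uint16_t h) {
    uint32_t sign = static_cast<uint32_t>(h & 0x8000) << 16;
    uint32_t exp  = (h >> 10) & 0x1F;   // 5-bit exponent field
    uint32_t man  = h & 0x3FF;          // 10-bit mantissa field
    uint32_t bits;
    if (exp == 0 && man == 0) {
        bits = sign;                                           // +/- zero
    } else {
        bits = sign | ((exp - 15 + 127) << 23) | (man << 13);  // normals
    }
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}

// The narrowing direction (FP32 -> FP16) additionally requires
// rounding of the mantissa; round-to-nearest-even on the dropped
// 13 bits is the typical choice.
```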