2019 IEEE 30th International Conference on Application-Specific Systems, Architectures and Processors (ASAP) 2019
DOI: 10.1109/asap.2019.00-30
Sparstition: A Partitioning Scheme for Large-Scale Sparse Matrix Vector Multiplication on FPGA

Cited by 12 publications (5 citation statements)
References 14 publications
“…COO format is applied to [12] and [14], which has an intuitive format to utilize. CSR format is selected in this work and [13] to reduce memory size compared to COO format for common datasets which are not hyper-sparse. The SpMV accelerator in [14] selects 20-bit to 26-bit fixed-point precision for PageRank acceleration, while working on the newly proposed FP16 to FP32 transprecision scheme has shown higher accuracy.…”
Section: Results
confidence: 99%
“…To reduce the memory overhead, [12] divided the SpMV algorithm into two steps and used parallel floating-point multipliers, focusing on high-performance computing for large and hyper-sparse datasets. Matrix partitioning scheme was also proposed to accelerate SpMV operation [13]. Partitioning of the sparse matrix using Compressed Sparse Row (CSR) format enabled parallel computation.…”
Section: Introduction
confidence: 99%
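The statement above notes that partitioning a sparse matrix stored in Compressed Sparse Row (CSR) format enables parallel computation. As a minimal illustrative sketch (not the paper's FPGA implementation; function names are ours), CSR stores only the nonzeros, their column indices, and per-row offsets, and SpMV walks each row's slice of those arrays:

```python
def to_csr(dense):
    """Convert a dense matrix (list of rows) to CSR arrays
    (values, col_idx, row_ptr)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)   # nonzero value
                col_idx.append(j)  # its column index
        row_ptr.append(len(values))  # end of this row's slice
    return values, col_idx, row_ptr

def csr_spmv(values, col_idx, row_ptr, x):
    """Compute y = A @ x with A in CSR form."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```

Because each row's slice `row_ptr[i]:row_ptr[i+1]` is independent of the others, the outer loop over rows can be computed in parallel, which is the property the citing work exploits.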
“…To increase the scalability of our design and allow it to work on larger matrices, we will use sparstitioning [16]. Sparstitioning is a method for splitting both the matrix and the vector of a matrix-vector operation into partitions, so that no unnecessary data is read during the running of the operation on each partition.…”
Section: E. Sparstitioning
confidence: 99%
“…To increase the scalability of our design and allow it to work on larger matrices, we will use sparstitioning [27]. Sparstitioning is a method for splitting both the matrix and the vector of a matrix-vector operation into partitions so that no unnecessary data is read during the running of the operation on each partition.…”
Section: Sparstitioning
confidence: 99%
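The two statements above describe sparstitioning as splitting both the matrix and the vector into partitions so that no unnecessary data is read while processing each partition. A rough sketch of that idea, under our own assumptions (plain Python over CSR arrays, column-band partitions; the paper targets FPGA streaming, so this is illustrative only):

```python
def partitioned_spmv(values, col_idx, row_ptr, x, part_width):
    """Compute y = A @ x one column-partition at a time, so that only
    the slice of x belonging to the current partition needs to be
    resident while that partition is processed."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    n_parts = (len(x) + part_width - 1) // part_width
    for p in range(n_parts):                  # one pass per partition
        lo, hi = p * part_width, min((p + 1) * part_width, len(x))
        x_part = x[lo:hi]                     # only this slice is "loaded"
        for i in range(n_rows):
            for k in range(row_ptr[i], row_ptr[i + 1]):
                j = col_idx[k]
                if lo <= j < hi:              # nonzero falls in this band
                    y[i] += values[k] * x_part[j - lo]
    return y
```

Each pass touches only the vector entries its column band actually references, which is the "no unnecessary data is read" property the citing works highlight; a hardware version would additionally pre-split the matrix nonzeros per band rather than re-scan them.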