2017
DOI: 10.1145/3017994

Sparse Matrix-Vector Multiplication on GPGPUs

Abstract: The multiplication of a sparse matrix by a dense vector (SpMV) is a centerpiece of scientific computing applications: it is the essential kernel for the solution of sparse linear systems and sparse eigenvalue problems by iterative methods. The efficient implementation of the sparse matrix-vector multiplication is therefore crucial and has been the subject of an immense amount of research, with interest renewed with every major new trend in high performance computing architectures. The introduction of General Pu…

Cited by 103 publications (77 citation statements)
References 98 publications
“…A GPU dose engine is adopted to calculate the dose contribution matrix φi,j and then the matrix is converted to the most memory efficient sparse matrix format, that is, the compressed sparse row (CSR) format. The CSR format uses three arrays to store the nonzero elements, corresponding column indices and compressed row offsets which indicate the boundary of each row.…”
Section: Methods
Mentioning confidence: 99%
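The three-array layout described in the statement above can be sketched as follows. This is a minimal, illustrative CSR SpMV in plain Python (not the GPU dose engine's actual code); all names and the example matrix are hypothetical.

```python
# Sketch of the CSR (compressed sparse row) layout: three arrays hold
# the nonzero values, their column indices, and the row offsets that
# mark where each row's entries begin and end.

def spmv_csr(values, col_idx, row_ptr, x):
    """Multiply a CSR-stored sparse matrix by a dense vector x."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # Entries of row i live in values[row_ptr[i] : row_ptr[i + 1]].
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# The 3x3 matrix [[1, 0, 2], [0, 3, 0], [4, 0, 5]] in CSR form:
values  = [1.0, 2.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

Note how `row_ptr` compresses the row structure: its consecutive pairs delimit each row's slice of `values` and `col_idx`, which is what "compressed row offsets which indicate the boundary of each row" refers to.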
“…format. The CSR format uses three arrays to store the nonzero … did not support GPU acceleration. Table 2 lists the run times of the three cases (the times of dose calculation are not included).…”
Section: Heterogeneous Platform With CPU and GPU
Mentioning confidence: 99%
“…We distinguish between multiplying a sparse matrix by a dense vector (SpMV) and by a sparse vector (SpMSpV). There is extensive literature focusing on SpMV for GPUs (including a comprehensive survey [22]). However, we concentrate on SpMSpV, because it is more relevant to graph search algorithms where the vector represents the subset of vertices that are currently active and is typically sparse.…”
Section: Two Roads to Matrix-Vector Multiplication
Mentioning confidence: 99%
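The SpMV/SpMSpV distinction drawn in this statement can be made concrete with a small sketch. Assuming a column-oriented matrix layout and a sparse vector held as an index-to-value map, SpMSpV only touches the columns matching the vector's nonzeros; the data structures and names here are illustrative, not taken from the cited papers.

```python
# Hedged sketch of SpMSpV: the matrix is stored column-wise as a dict
# mapping column index -> list of (row, value) pairs, and the sparse
# input vector is a dict mapping index -> value. Work is proportional
# to the vector's nonzeros, not the matrix dimension.

def spmspv(cols, x_sparse):
    """Sparse matrix (by columns) times sparse vector; returns a sparse dict."""
    y = {}
    for j, xj in x_sparse.items():
        # Only columns where x has a nonzero are ever visited.
        for i, a_ij in cols.get(j, []):
            y[i] = y.get(i, 0.0) + a_ij * xj
    return y

# The 3x3 matrix [[1, 0, 2], [0, 3, 0], [4, 0, 5]] as a column map,
# applied to a sparse vector with a single nonzero at index 2.
cols = {0: [(0, 1.0), (2, 4.0)], 1: [(1, 3.0)], 2: [(0, 2.0), (2, 5.0)]}
print(spmspv(cols, {2: 1.0}))  # {0: 2.0, 2: 5.0}
```

This mirrors the graph-search use case described above: when the vector encodes the currently active frontier of vertices, only the matrix columns for those vertices contribute to the next frontier.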
“…For this reason, optimizing SpMV has been extensively studied by many researchers (see, e.g., other works). Two recent works provide a detailed survey …”
Section: Related Work
Mentioning confidence: 99%
“…However, it is known that SpMV's performance falls well behind the capacity of modern computers. Hence, optimization of SpMV has been extensively studied (see the works of Langr and Tvrdik, and Filippone et al. for comprehensive surveys).…”
Section: Introduction
Mentioning confidence: 99%