2012 IEEE Seventh International Conference on Networking, Architecture, and Storage
DOI: 10.1109/nas.2012.11
Parallel Sparse Matrix Multiplication for Preconditioning and SSTA on a Many-Core Architecture

Cited by 4 publications (4 citation statements)
References 20 publications
“…Several approaches have been developed for representing sparse matrices aiming at improving the efficiency of memory usage and the computation of arithmetic operations. In [60] the focus was on reducing the number of accesses for particular matrix operations and defining a more natural mapping between the indices in the physical value matrix and the logical sparse coefficient matrix. In [20], the authors proposed an approach to customise the matrix representation according to the specific sparseness characteristics of matrices and the target machine by performing register and cache level optimisations.…”
Section: Advances in Sparse Matrices
confidence: 99%
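The statement above describes representations that map between a compact physical value array and the logical sparse coefficient matrix. As a minimal illustration (not the scheme used in the cited papers), the widely used compressed sparse row (CSR) format realizes exactly this kind of mapping; all names below are illustrative:

```python
def dense_to_csr(dense):
    """Convert a dense row-major matrix (list of lists) to CSR arrays."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)   # physical value array: nonzeros only
                col_idx.append(j)  # logical column of each stored value
        row_ptr.append(len(values))  # index where each row's entries end
    return values, col_idx, row_ptr

def csr_get(values, col_idx, row_ptr, i, j):
    """Map a logical index (i, j) back into the physical value array."""
    for k in range(row_ptr[i], row_ptr[i + 1]):
        if col_idx[k] == j:
            return values[k]
    return 0  # structural zero: never stored physically
```

For example, the matrix [[5,0,0],[0,8,3],[0,0,6]] is stored as values [5,8,3,6] with column indices [0,1,2,2] and row pointers [0,1,3,4]; a row's nonzeros are contiguous in memory, which is what reduces the number of accesses for row-oriented operations.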
“…Thus, the development and advances on computer hardware have thoroughly influenced the development of linear algebra algorithms. In this context, the parallel processing of matrix operations in distributed memory architectures arises as an important field of study [60,20,38]. In particular, operations with dense matrices have been the subject of intensive research [38,8,9,23,59,10], whereas the problem of operating with sparse matrices has comparatively received less attention.…”
Section: Introduction
confidence: 99%
“…In [4] the focus is on reducing the number of accesses for particular matrix operations and defining a more natural mapping between the indices in the physical value matrix and the logical sparse coefficient matrix. In [5] the authors propose an approach to customise the matrix representation according to the specific sparseness characteristics of matrices and the target machine by performing register and cache level optimisations, for example by adding explicit zeros to improve the memory system behaviour.…”
Section: Related Work
confidence: 99%
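The technique mentioned in the statement above, adding explicit zeros so that stored blocks are dense and loop bodies become regular, can be sketched as block padding in the style of block-CSR formats. This is a generic illustration under assumed 2x2 blocks, not the specific customisation proposed in the cited work:

```python
def to_blocked(dense, r=2, c=2):
    """Group the nonzeros of a dense row-major matrix into fixed r x c
    blocks. Any block containing at least one nonzero is stored densely,
    so its unoccupied positions become explicit zeros. The padding costs
    storage but makes the inner loop over a block fully regular, which
    helps register reuse and cache behaviour."""
    n_rows, n_cols = len(dense), len(dense[0])
    blocks = {}  # (block_row, block_col) -> dense r*c block, row-major
    for i in range(n_rows):
        for j in range(n_cols):
            if dense[i][j] != 0:
                key = (i // r, j // c)
                block = blocks.setdefault(key, [0] * (r * c))
                # positions of the block never written stay as explicit zeros
                block[(i % r) * c + (j % c)] = dense[i][j]
    return blocks
```

For [[5,0,0,0],[0,8,0,3],[0,0,6,0],[0,0,0,0]], only three of the four 2x2 blocks are stored; the block holding 5 and 8 carries two explicit zeros so that a blocked kernel can process it with a fixed-size, unrollable loop.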
“…Single-processor architectures have evolved into multi-core architectures and, in recent years, Graphics Processing Units (GPUs) have emerged as co-processors capable of handling large amount of calculations. In this context, the parallel processing of matrix operations in distributed memory architectures arises as an important field of study [4]- [6]. In particular, the operations with dense matrices have been the subject of intensive research.…”
Section: Introduction
confidence: 99%