1992
DOI: 10.1137/0913011
A Set of New Mapping and Coloring Heuristics for Distributed-Memory Parallel Processors

Cited by 27 publications (10 citation statements). References 39 publications.
“…Two basic types of operations are repeatedly performed at each iteration. These are linear operations on dense vectors and the sparse matrix-vector product (SpMxV) of the form y = Ax, where A is an m × m square matrix with the same sparsity structure as the coefficient matrix [3], [5], [8], [35], and y and x are dense vectors. Our goal is the parallelization of the computations in the iterative solvers through rowwise or columnwise decomposition of the A matrix, where processor k owns row stripe A_k^r or column stripe A_k^c, respectively, for a parallel system with u processors.…”
Section: Introduction
confidence: 99%
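The rowwise scheme in the excerpt above can be sketched as follows. The CSR storage layout, the even split of rows among simulated processors, and the function name are illustrative assumptions for a serial sketch; a real implementation would run the row stripes on separate processors and pre-communicate the needed entries of x.

```python
def spmxv_rowwise(indptr, indices, data, x, num_procs):
    """Serial sketch of rowwise-decomposed SpMxV, y = A x, with A in CSR
    form. Each simulated "processor" k independently computes the rows of
    its stripe A_k^r; no communication is modeled here."""
    m = len(indptr) - 1
    y = [0.0] * m
    stripe = (m + num_procs - 1) // num_procs  # assumed even row split
    for k in range(num_procs):                 # "processor" k owns [lo, hi)
        lo, hi = k * stripe, min((k + 1) * stripe, m)
        for i in range(lo, hi):
            s = 0.0
            for jj in range(indptr[i], indptr[i + 1]):
                s += data[jj] * x[indices[jj]]
            y[i] = s
    return y
```

Because each y entry is produced by exactly one stripe, no reduction over partial results is needed; the scheme's communication cost lies entirely in gathering the x entries each stripe touches.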
“…In columnwise decomposition, processor k is responsible for computing y^k = A_k^c x_k (where y = Σ_{k=1}^{u} y^k) and the linear operations on the kth blocks of the vectors. With these decomposition schemes, the linear vector operations can be easily and efficiently parallelized [3], [35], such that only the inner-product computations introduce a global communication overhead, whose volume does not scale up with increasing problem size. In parallel SpMxV, the rowwise and columnwise decomposition schemes require communication before or after the local SpMxV computations, so they can also be considered pre- and post-communication schemes, respectively.…”
Section: Introduction
confidence: 99%
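A minimal serial sketch of the columnwise (post-communication) scheme: each simulated processor forms a partial vector y^k from its column stripe, and the final summation stands in for the post-communication reduction. The dense list-of-lists matrix and even column split are assumptions made for brevity.

```python
def spmxv_columnwise(A, x, num_procs):
    """Serial sketch of columnwise-decomposed SpMxV. Each "processor" k
    computes a partial result y^k = A_k^c x_k from its column stripe;
    summing the partials models the post-communication step y = sum_k y^k."""
    m, n = len(A), len(A[0])
    stripe = (n + num_procs - 1) // num_procs  # assumed even column split
    partials = []
    for k in range(num_procs):                 # "processor" k owns cols [lo, hi)
        lo, hi = k * stripe, min((k + 1) * stripe, n)
        yk = [sum(A[i][j] * x[j] for j in range(lo, hi)) for i in range(m)]
        partials.append(yk)
    # post-communication: reduce the partial vectors into y
    return [sum(p[i] for p in partials) for i in range(m)]
```

In a distributed setting the reduction would be a fold/all-reduce over the partial vectors, which is exactly why this scheme is called post-communication.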
“…For details on the orderings used, see [7]. Also see [5] for an interesting comparison of scalable orderings. The orderings used in this comparison are the following:…”
Section: Numerical Results
confidence: 99%
“…See [5] for some coloring strategies well adapted to distributed-memory parallel computers. A multi-coloring algorithm applied to the graph of Figure 1 is shown in Figure 3, as well as its associated lower triangular matrix.…”
Section: Blocking Algorithms for Sparse Triangular Systems
confidence: 99%
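A greedy multi-coloring pass of the kind the excerpt alludes to can be sketched as below. Vertices that share a color are mutually non-adjacent, so the corresponding unknowns in a sparse triangular solve can be updated in parallel within each color class. The adjacency-dict representation and the vertex visiting order are assumptions, not the specific strategies of [5].

```python
def greedy_multicolor(adj):
    """Greedy multi-coloring: visit vertices in sorted order and assign
    each the smallest color not already used by a colored neighbor.
    Returns a dict mapping vertex -> color index."""
    color = {}
    for v in sorted(adj):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:   # smallest color absent among neighbors
            c += 1
        color[v] = c
    return color
```

Fewer colors mean fewer sequential stages in the colored triangular solve, but greedy coloring only guarantees at most (max degree + 1) colors, not the minimum.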
“…A simple and attractive mapping method considered by many researchers (see [14, 22, 37–39]) is the so-called data-strip or block-partitioning heuristic. This heuristic is referred to under different names, among them: one-dimensional (1D) strip partitioning, two-dimensional (2D) strip partitioning, multilevel load-balanced method, median splitting, sector splitting, and the block partitioning algorithm.…”
Section: P × Q Partitioning Algorithm
confidence: 99%
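A 1D strip partitioning heuristic in the spirit of the excerpt can be sketched as follows: cut a sequence of per-row weights into contiguous strips whose accumulated weights approximate an equal share of the total. The weight model and the simple cut rule are illustrative assumptions, not the exact median- or sector-splitting algorithms cited.

```python
def strip_partition(weights, num_parts):
    """1D strip partitioning sketch: return (start, end) index pairs for
    num_parts contiguous strips of the row sequence, cutting whenever the
    running weight reaches the next multiple of total/num_parts."""
    total = sum(weights)
    target = total / num_parts
    bounds = [0]
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= target * len(bounds) and len(bounds) < num_parts:
            bounds.append(i + 1)   # cut after row i
    bounds.append(len(weights))
    return [(bounds[k], bounds[k + 1]) for k in range(num_parts)]
```

Because strips are contiguous, the heuristic preserves locality in the row ordering; its quality therefore depends on how well that ordering clusters related rows, which is exactly where the compared mapping heuristics differ.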