Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis 2009
DOI: 10.1145/1654059.1654061

Sparse matrix factorization on massively parallel computers

Cited by 22 publications (16 citation statements)
References 26 publications
“…This only needs to be done once, as long as the kinematic graph of the simulated system does not change. For parallel computation, a Nested Dissection ordering is better suited, but gives similar results with respect to the amount of fill-in [GKG09].…”
Section: Fill-in of the Cholesky Factorization
Mentioning confidence: 96%
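The fill-in comparison this quote alludes to is easy to reproduce in miniature. The sketch below is only an illustration under assumptions of my own: it uses SciPy's sequential SuperLU wrapper (LU rather than Cholesky) and a COLAMD ordering rather than nested dissection, since neither the solver of [GKG09] nor a nested-dissection ordering is exposed there, and it simply contrasts factor size with and without a fill-reducing permutation on a small 2-D Laplacian.

```python
# Illustrative sketch only: compares fill-in of a sparse factorization
# with the natural ordering vs. a fill-reducing (COLAMD) ordering.
# Uses SciPy's SuperLU wrapper, not the Cholesky code or the nested
# dissection ordering discussed in [GKG09].
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                                            # 50x50 grid -> 2500 unknowns
I = sp.identity(n, format="csr")
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()       # 2-D Laplacian (SPD)

for ordering in ("NATURAL", "COLAMD"):
    lu = spla.splu(A, permc_spec=ordering)
    print(f"{ordering:8s}  nnz(A) = {A.nnz:6d}  nnz(L+U) = {lu.nnz}")
```

A good ordering typically reduces nnz(L+U) by a large factor on matrices like this, which is why fill-reducing orderings, and nested dissection in particular for parallel factorization, matter.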
“…Task graphs have been used for parallelization in dense linear algebra [38,39], sparse linear algebra [17], and linear transforms [40]. The KDG is a generalization of classic task graphs because it can handle the creation of new tasks and changes in dependences during execution.…”
Section: Related Work
Mentioning confidence: 99%
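As a concrete (and deliberately simplified) picture of the task graphs mentioned here, the sketch below runs a small static task graph on a thread pool; the task names and dependences are invented for illustration, and unlike the KDG described in the citing paper it cannot create new tasks or change dependences during execution.

```python
# Minimal static task-graph executor (illustration only; not the KDG).
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
from graphlib import TopologicalSorter

def run_task_graph(deps, work, max_workers=4):
    """deps: {task: set of prerequisite tasks}; work: {task: callable}."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        pending = {}
        while ts.is_active():
            for task in ts.get_ready():            # tasks whose deps are satisfied
                pending[pool.submit(work[task])] = task
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                ts.done(pending.pop(fut))           # unblock dependent tasks

# Invented example graph: two "panel" tasks, each followed by an "update".
deps = {"panel_1": set(), "update_1": {"panel_1"},
        "panel_2": {"update_1"}, "update_2": {"panel_2"}}
work = {t: (lambda t=t: print("running", t)) for t in deps}
run_task_graph(deps, work)
```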
“…These are either multifrontal [16] or supernodal techniques [34]. The main multifrontal packages are the multifrontal massively parallel solver (MUMPS) [28] and the Watson Sparse Matrix Package (WSMP) [40]. SuperLU_DIST [58] is an MPI-parallel version of the SuperLU family of solvers for unsymmetric systems, based on a supernodal right-looking LU factorization.…”
Section: Algebraic Solvers
Mentioning confidence: 99%
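To make the supernodal LU family named here a bit more tangible, the following sketch solves a small unsymmetric sparse system through the sequential SuperLU exposed in SciPy; this is only a stand-in, since SuperLU_DIST itself (like MUMPS and WSMP) is driven through its own native and MPI interfaces, and the matrix values are made up.

```python
# Illustration only: sequential SuperLU (via SciPy) on a tiny unsymmetric
# system, as a stand-in for the distributed solvers named in the quote.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

A = sp.csc_matrix(np.array([[ 4.0, -1.0,  0.0],
                            [-2.0,  5.0, -1.0],
                            [ 0.0, -3.0,  6.0]]))   # made-up values
b = np.array([1.0, 2.0, 3.0])

lu = spla.splu(A)            # supernodal LU factorization
x = lu.solve(b)
print("residual norm:", np.linalg.norm(A @ x - b))
```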
“…[16], which is supported by MPI. The WSMP software package [40] is also developed as a hybrid implementation using MPI and Pthreads, among others. Abstracting the solver interface in this manner allows the application writer to choose the most effective solver for the particular problem.…”
Section: Libraries
Mentioning confidence: 99%
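A minimal sketch of what "abstracting the solver interface" can look like is given below; the backend names and the registry are hypothetical and are not the API of any package cited above, but they show how an application can code against a single solve() call and swap the underlying solver per problem.

```python
# Hypothetical solver-interface abstraction (names invented for illustration).
import scipy.sparse.linalg as spla

_BACKENDS = {
    "superlu": lambda A, b: spla.splu(A.tocsc()).solve(b),    # sparse direct
    "gmres":   lambda A, b: spla.gmres(A, b, atol=1e-10)[0],  # iterative
}

def solve(A, b, backend="superlu"):
    """Dispatch the linear solve to the chosen backend."""
    return _BACKENDS[backend](A, b)

# Usage with a sparse matrix A and dense right-hand side b:
#   x = solve(A, b, backend="gmres")
```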