2003
DOI: 10.1016/s0167-8191(03)00067-x
Impact of the implementation of MPI point-to-point communications on the performance of two general sparse solvers

Abstract: The current concept for high-level radioactive waste disposal at Yucca Mountain is for the waste to be placed in underground tunnels (or drifts) in the middle of a thick unsaturated zone. Flow modeling and field testing have shown that not all flow encountering a drift will seep into the drift. The underlying reason for the diversion of unsaturated flow around a drift is that capillary forces in the fractures and matrix prevent water entry into the drift unless the capillary pressure in the rock decreases suffi…

Cited by 10 publications (11 citation statements) | References 15 publications
“…For instance, Amestoy et al studied the impact of the MPI buffering implementation on the performance of sparse matrix solvers [36]. Hunold et al proposed multilevel hierarchical matrix multiplication to improve the application performance on the PC cluster [37].…”
Section: Related Work
confidence: 99%
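A minimal sketch (in C, not code from the cited works) of why the MPI buffering implementation matters for such solvers: a symmetric exchange written with plain MPI_Send is safe only when the implementation buffers the message (eager protocol); for large messages handled with a rendezvous protocol, both processes block in their sends and deadlock, whereas MPI_Bsend with an explicitly attached user buffer removes the dependence on the implementation. The message length, tag, and two-process layout below are arbitrary assumptions for the illustration.

#include <mpi.h>
#include <stdlib.h>

#define N   1000000            /* message length in doubles, chosen arbitrarily */
#define TAG 7                  /* arbitrary message tag */

int main(int argc, char **argv) {
    int rank, peer, bufsize;
    double *out, *in;
    void *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;                       /* assumes exactly two processes */

    out = calloc(N, sizeof(double));
    in  = malloc(N * sizeof(double));

    /* With plain MPI_Send here, completion would depend on whether the
     * implementation buffers the message; both ranks sending first could
     * deadlock under a rendezvous protocol.  MPI_Bsend instead copies the
     * message into the user-attached buffer and returns regardless. */
    bufsize = N * (int)sizeof(double) + MPI_BSEND_OVERHEAD;
    buf = malloc(bufsize);
    MPI_Buffer_attach(buf, bufsize);

    MPI_Bsend(out, N, MPI_DOUBLE, peer, TAG, MPI_COMM_WORLD);
    MPI_Recv (in,  N, MPI_DOUBLE, peer, TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Buffer_detach(&buf, &bufsize);     /* blocks until the buffered send has drained */
    free(buf); free(out); free(in);
    MPI_Finalize();
    return 0;
}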
“…Both methods have the same total communication volume but the multifrontal method requires fewer messages. In a subsequent paper (Amestoy, Duff, L'Excellent and Li 2003a), they show how MPI implementations affect both solvers. Li (2005) provides an overview of all three SuperLU variants: (1) the left-looking sequential SuperLU, (2) the left-looking parallel shared-memory SuperLU MT, and (3) the right-looking parallel distributed-memory SuperLU DIST.…”
Section: Supernodal LU Factorization
confidence: 99%
“…Amestoy et al. (2001b) compare their two distributed-memory approaches and software: SuperLU DIST, which relies on a synchronous supernodal method with static pivoting and iterative refinement (discussed in Section 9.2), and MUMPS, which uses an asynchronous multifrontal method with partial threshold and delayed pivoting. Amestoy et al. (2003a) report on the impact of MPI point-to-point communications on the performance of their respective solvers. They present challenges and solutions on the use of buffered asynchronous message transmission and reception in MPI.…”
Section: Distributed-Memory Parallel LDL^T and LU Factorization
confidence: 99%
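As an illustration of the kind of asynchronous transmission and reception discussed in that statement, the sketch below (C, not the authors' code) posts non-blocking MPI_Isend calls for outgoing contribution blocks and polls with MPI_Iprobe between units of local work so that computation and communication overlap. The tag, block size, step count, and ring-shaped communication pattern are assumptions made for the example.

#include <mpi.h>
#include <stdlib.h>

#define TAG    11              /* arbitrary message tag */
#define BLOCK  1024            /* arbitrary size of one contribution block */
#define NSTEPS 10              /* arbitrary number of pipeline steps */

/* Stand-in for the local numerical work of a solver step. */
static void local_work(double *b, int n) {
    for (int i = 0; i < n; ++i) b[i] += 1.0;
}

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double (*out)[BLOCK] = calloc(NSTEPS, sizeof *out);  /* one send buffer per step */
    double *in = malloc(BLOCK * sizeof(double));
    MPI_Request reqs[NSTEPS];
    int nsent = 0, received = 0;
    int expected = (size > 1) ? NSTEPS : 0;  /* one block per step from the left neighbour */

    for (int step = 0; step < NSTEPS; ++step) {
        local_work(out[step], BLOCK);

        /* Asynchronous transmission: post the send and return at once;
         * completion is checked with MPI_Waitall at the end. */
        if (size > 1)
            MPI_Isend(out[step], BLOCK, MPI_DOUBLE, (rank + 1) % size, TAG,
                      MPI_COMM_WORLD, &reqs[nsent++]);

        /* Asynchronous reception: drain whatever has already arrived
         * without blocking the ongoing computation. */
        int flag = 1;
        while (flag && received < expected) {
            MPI_Status st;
            MPI_Iprobe(MPI_ANY_SOURCE, TAG, MPI_COMM_WORLD, &flag, &st);
            if (flag) {
                MPI_Recv(in, BLOCK, MPI_DOUBLE, st.MPI_SOURCE, TAG,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                ++received;
            }
        }
    }

    /* Collect any blocks still in flight, then complete the posted sends. */
    while (received < expected) {
        MPI_Recv(in, BLOCK, MPI_DOUBLE, MPI_ANY_SOURCE, TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        ++received;
    }
    MPI_Waitall(nsent, reqs, MPI_STATUSES_IGNORE);

    free(out); free(in);
    MPI_Finalize();
    return 0;
}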
“…For instance, Amestoy et al studied the impact of the MPI buffering implementation on the performance of sparse matrix solvers [2]. Chakrabarti and Yelick investigated application-controlled consistency mechanisms to minimize synchronization and communication overhead for solving the Gröbner basis problem on distributed memory machines [4].…”
Section: Related Work
confidence: 99%