2013
DOI: 10.1007/978-3-642-36803-5_14

Use of Direct Solvers in TFETI Massively Parallel Implementation

Abstract: The FETI methods blend iterative and direct solvers. The dual problem is solved iteratively, e.g. with the CG method; in each iteration, the auxiliary problems arising from the application of the unassembled system matrix (the subdomain problems' solutions and the projector application in the dual operator) are solved directly. The paper compares the direct solvers available through PETSc (the built-in PETSc solver, MUMPS, SuperLU) on the Cray XE6 machine HECToR with respect to their performance in the two most time consuming act…
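The split the abstract describes, an iterative dual loop wrapped around per-subdomain direct solves, maps naturally onto PETSc's KSP/PC interface. Below is a minimal petsc4py sketch (not taken from the paper) of how one subdomain factorization can be switched among the three back ends the paper compares: the built-in PETSc factorization, MUMPS, and SuperLU. The helper name is hypothetical, and the external packages are assumed to be enabled in the PETSc build.

```python
# Minimal petsc4py sketch: a "direct solve" KSP (preonly + LU) whose
# factorization back end can be switched among PETSc, MUMPS and SuperLU.
# A is a stand-in for one subdomain stiffness matrix (a sequential Mat).
from petsc4py import PETSc

def make_direct_solver(A, backend="mumps"):
    """Return a KSP that applies A^{-1} through a sparse direct factorization.

    backend: "petsc" (built-in), "mumps", or "superlu" -- the three solvers
    compared in the paper (MUMPS/SuperLU must be part of the PETSc build).
    """
    ksp = PETSc.KSP().create(A.getComm())
    ksp.setOperators(A)
    ksp.setType(PETSc.KSP.Type.PREONLY)   # no Krylov iterations, just apply the factorization
    pc = ksp.getPC()
    pc.setType(PETSc.PC.Type.LU)          # CHOLESKY is an option for SPD subdomain matrices
    pc.setFactorSolverType(backend)       # choose the factorization package
    ksp.setFromOptions()                  # allow -pc_factor_mat_solver_type overrides
    ksp.setUp()                           # symbolic + numeric factorization happens here
    return ksp

# Usage inside the dual CG loop: the factorization is built once and reused,
#   ksp = make_direct_solver(A, "superlu")
#   ksp.solve(b, x)   # x = A^{-1} b in every iteration
```

The same factorization is reused in every CG iteration on the dual problem, which is why the choice of direct solver dominates both the setup and the per-iteration cost the paper measures.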

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
1
1

Citation Types

0
22
0

Year Published

2014
2014
2023
2023

Publication Types

Select...
3
3
1

Relationship

2
5

Authors

Journals

Citations: cited by 23 publications (24 citation statements). References: 7 publications (7 reference statements).
“…We have suggested and compared several strategies for parallel CP solution [11], [9]. The explicit orthonormalization approach starts to fail when the nullspace is large (thousands).…”
Section: Coarse Problem (mentioning)
confidence: 99%
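As an aside on the "explicit orthonormalization" strategy mentioned here, a small dense NumPy sketch (an assumption about what the cited approach does, shown at toy size) illustrates its appeal: once the rows of the coarse-space matrix G are orthonormalized, GG^T becomes the identity and applying the projector P = I - G^T (GG^T)^{-1} G requires no coarse solve at all. The citing authors' point is that this stops being practical once the nullspace dimension reaches the thousands, presumably because the orthonormalization itself becomes a dense, communication-heavy computation.

```python
# Dense NumPy illustration (toy sizes; assumed interpretation of the
# "explicit orthonormalization" strategy): orthonormalize the rows of G
# so that G G^T = I and the projector reduces to P = I - Q^T Q.
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 40                      # m = nullspace dimension, n = dual dimension
G = rng.standard_normal((m, n))   # stand-in for the coarse-space matrix G

Q, _ = np.linalg.qr(G.T)          # economic QR of G^T: columns of Q are orthonormal
Q = Q.T                           # rows of Q span the row space of G (range of G^T)

assert np.allclose(Q @ Q.T, np.eye(m))   # the coarse matrix is the identity

lam = rng.standard_normal(n)
P_lam = lam - Q.T @ (Q @ lam)     # projector application without any coarse solve
assert np.allclose(G @ P_lam, np.zeros(m), atol=1e-10)   # P maps into null(G)
```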
“…In typical DD implementations, this produces an unacceptable parallel efficiency loss, since all the cores not involved in the coarse solver computation are idling (see Figure 1). One obvious strategy to improve scalability is to reduce the wall-clock time spent at the coarse solver by using, e.g., a MPI-distributed sparse direct solver like MUMPS [9] (see [10] for BDDC and [11] for FETI-DP). However, this approach only mitigates the problem.…”
Section: Introduction (mentioning)
confidence: 99%
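As a hedged sketch of the mitigation this excerpt describes, the coarse matrix GG^T can be assembled as a parallel (MPIAIJ) PETSc matrix and handed to an MPI-distributed direct solver such as MUMPS, so that every rank takes part in the factorization instead of idling. The helper below is a placeholder, not the configuration used in the cited papers.

```python
# Sketch (assumed setup): factor the coarse matrix G G^T with a distributed
# direct solver (MUMPS through PETSc), so the coarse solve is spread over
# the whole communicator rather than confined to a single core.
from petsc4py import PETSc

def make_coarse_solver(GGt):
    """GGt: parallel (MPIAIJ) PETSc matrix holding G G^T.
    Returns a KSP that applies (G G^T)^{-1} via a parallel MUMPS Cholesky."""
    ksp = PETSc.KSP().create(GGt.getComm())
    ksp.setOperators(GGt)
    ksp.setType(PETSc.KSP.Type.PREONLY)
    pc = ksp.getPC()
    pc.setType(PETSc.PC.Type.CHOLESKY)   # G G^T is symmetric (SPD when G has full row rank)
    pc.setFactorSolverType("mumps")      # distributed factorization: all ranks participate
    ksp.setUp()
    return ksp
```

As the excerpt notes, this only mitigates the problem; the coarse solve remains a scalability bottleneck compared with the purely local subdomain solves.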
“…Sparse matrix-matrix multiplication (SpGEMM) is a kernel operation in a wide variety of scientific applications such as finite element simulations based on domain decomposition [3,22], molecular dynamics (MD) [15,16,17,25,28,29,32,36], and linear programming (LP) [7,8,26], all of which utilize parallel processing technology to reduce execution times. Among these applications, below we exemplify three methods/codes from which we select realistic SpGEMM instances. In finite element application fields, finite element tearing and interconnecting (FETI) [3,22] type domain decomposition methods are used for numerical solution of engineering problems. In this application, the SpGEMM computation GG^T is performed, where G = R^T B^T, R is the block diagonal basis of the stiffness matrix, and B is the signed matrix with entries −1, 0, 1 describing the subdomain interconnectivity.…”
(mentioning)
confidence: 99%
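The SpGEMM kernel this excerpt describes can be reproduced at toy scale with SciPy sparse matrices. The sizes, densities, and random sparsity patterns below are illustrative stand-ins, not data from the cited works; only the algebra (G = R^T B^T followed by the product GG^T) follows the excerpt.

```python
# Toy reproduction of the FETI coarse-matrix SpGEMM: G = R^T B^T, then G G^T.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(1)
n_primal, n_dual, n_kernel = 200, 80, 12

# R: block-diagonal basis of the subdomain stiffness-matrix kernels
# (a random sparse stand-in with the right shape is used here).
R = sp.random(n_primal, n_kernel, density=0.05, random_state=rng, format="csr")

# B: signed matrix with entries -1, 0, 1 describing subdomain interconnectivity.
B = sp.random(n_dual, n_primal, density=0.02, random_state=rng, format="csr")
B.data = rng.choice([-1.0, 1.0], size=B.data.size)

G = (R.T @ B.T).tocsr()   # G = R^T B^T, shape (n_kernel, n_dual)
GGt = G @ G.T             # the SpGEMM product: the small symmetric coarse matrix

print(G.shape, GGt.shape, GGt.nnz)
```

In a production FETI code these products are computed in parallel, which is why the cited SpGEMM study selects them as realistic test instances.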