2019
DOI: 10.1016/j.amc.2019.06.017

Distributed fast boundary element methods for Helmholtz problems

Cited by 7 publications (13 citation statements). References 29 publications.
“…The solver is parallelized using MPI in the distributed memory. The distribution of the system matrices is based on [5,8,9]. The space-time boundary mesh is decomposed into time slices which define blocks in the system matrices.…”
Section: Results
confidence: 99%
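The time-slice decomposition mentioned in this statement can be illustrated with a short sketch. The function name and the uniform slicing rule are assumptions for illustration, not taken from the cited papers:

```python
def time_slices(element_times, T, t_end):
    """Assign each space-time boundary element to one of T uniform
    time slices on [0, t_end]; elements in slice s then contribute to
    the s-th block row/column of the system matrices (illustrative
    only, not the exact scheme of the cited papers)."""
    return [min(int(t / t_end * T), T - 1) for t in element_times]

# four elements, four slices on [0, 1]
print(time_slices([0.1, 0.4, 0.55, 0.9], T=4, t_end=1.0))  # -> [0, 1, 2, 3]
```
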
“…, G_{P−1}. In [5,8] we employ a cyclic decomposition algorithm: first, a generator graph G_0 on a minimal number of vertices (corresponding to the blocks to be assembled by process 0) is constructed; the remaining graphs G_1, …”
Section: Distributed Memory Parallelization
confidence: 99%
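The shift structure behind such a cyclic decomposition can be sketched in a few lines. This is a simplified analogue (round-robin assignment of cyclic block diagonals), not the exact generator-graph construction of [5,8]; the function name and block layout are assumptions:

```python
def owner(i, j, n, P):
    """Simplified cyclic assignment of block (i, j) of an n x n block
    matrix to one of P processes: blocks on the same cyclic diagonal
    d = (i - j) mod n are grouped, and the diagonals are dealt out
    round-robin. This mimics how process p works with the generator
    graph G_0 shifted by p, but is not the exact algorithm of [5,8]."""
    return ((i - j) % n) % P

n, P = 8, 4
blocks = {p: [(i, j) for i in range(n) for j in range(n)
              if owner(i, j, n, P) == p] for p in range(P)}
# with n a multiple of P, every process owns n*n/P blocks
print([len(blocks[p]) for p in range(P)])  # -> [16, 16, 16, 16]
```
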
“…The parallelization of the method based on the cyclic graph decomposition was presented in [13], where only certain special numbers of processors were discussed. In [12] we further extended the approach to support a general number of processors. In the following section we recall its basic principle and extend it to support the solution of MTF systems.…”
Section: Parallel ACA
confidence: 99%
“…In Sect. 3 we propose a strategy to parallelize the assembly of the MTF matrix blocks and their application in an iterative solver, based on the approach presented in [11][12][13] for single-domain problems. Apart from the distributed parallelism, the method takes full advantage of the BEM4I library [14,20,21] and its assemblers, parallelized in shared memory and vectorized by OpenMP.…”
Section: Introduction
confidence: 99%