2011 IEEE Seventh International Conference on eScience
DOI: 10.1109/escience.2011.60

Design of a Parallel Hybrid Direct/Iterative Solver for CFD Problems

Abstract: We discuss the parallel implementation of a hybrid direct/iterative solver for a special class of saddle point matrices arising from the discretization of the steady Navier-Stokes equations on an Arakawa C-grid, the F-matrices. The two-level method described here has the following properties: (i) it is very robust, even at comparatively high Reynolds numbers; (ii) a single parameter controls fill and convergence, making the method straightforward to use; (iii) the convergence rate is independent of t…
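The report itself contains no code; purely as a hedged illustration of the hybrid direct/iterative pattern the abstract describes, the sketch below preconditions a Krylov iteration (GMRES) with an incomplete LU factorization of a small synthetic saddle point matrix, using SciPy. The test matrix, the tiny regularization of the (2,2) block, and the fill_factor setting are illustrative assumptions, not the authors' F-matrix algorithm.

```python
# Minimal sketch, NOT the paper's solver: an incomplete factorization
# (the "direct" ingredient) used as a preconditioner for GMRES (the
# "iterative" ingredient) on a synthetic saddle point matrix [[A, B^T], [B, C]].
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 50, 20
rng = np.random.default_rng(0)

# Synthetic, diagonally dominant (1,1) block and a random constraint block.
A = sp.eye(n) + 0.1 * sp.random(n, n, density=0.05, random_state=0)
A = (A + A.T) * 0.5 + n * sp.eye(n)
B = sp.random(m, n, density=0.1, random_state=1)
C = -1e-8 * sp.eye(m)              # tiny shift so the ILU avoids exact zero pivots
K = sp.bmat([[A, B.T], [B, C]], format="csc")
b = rng.standard_normal(n + m)

# "Direct" part: threshold ILU; fill_factor/drop_tol loosely play the role of
# a single fill/accuracy knob (illustrative only).
ilu = spla.spilu(K, fill_factor=10.0, drop_tol=1e-4)
M = spla.LinearOperator(K.shape, matvec=ilu.solve)

# "Iterative" part: preconditioned GMRES.
x, info = spla.gmres(K, b, M=M, maxiter=200)
print("GMRES info:", info, "relative residual:",
      np.linalg.norm(K @ x - b) / np.linalg.norm(b))
```

In the actual two-level method, a single parameter controls both fill and convergence and the dropping is tailored to F-matrices; the generic spilu knobs above only stand in for that idea.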

Cited by 4 publications (6 citation statements). References: 12 publications.
“…Rather than reviewing the method and theory in detail, we only briefly present it here. For details, see Wubs and Thies [24] and Thies and Wubs [25].…”
Section: The Two-Level ILU Preconditioner
Citation type: mentioning (confidence: 99%)
“…In this article, we present a novel multilevel preconditioning method which is specially designed for the 3D Navier-Stokes equations. In Section 2, we first describe the two-level ILU preconditioner as introduced in Wubs and Thies [24] and Thies and Wubs [25]. After this, we generalize the two-level method to a multilevel method in Section 3.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
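As a rough sketch of the two-level-to-multilevel idea this excerpt refers to, the toy below performs recursive 2x2 block elimination with exact dense solves: eliminate one block of unknowns, form the Schur complement, and treat it as the next level. It is my own illustration under those simplifying assumptions, not the cited multilevel ILU, which works with sparse incomplete factorizations and a problem-specific partition of the unknowns.

```python
# Dense, exact-arithmetic toy of nested (multilevel) block elimination.
# The real method replaces the exact solves below with incomplete ones.
import numpy as np

def multilevel_solve(A, b, min_size=4):
    """Solve A x = b by recursive 2x2 block elimination."""
    n = A.shape[0]
    if n <= min_size:
        return np.linalg.solve(A, b)          # coarsest level: direct solve
    k = n // 2                                # split unknowns into two blocks
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    b1, b2 = b[:k], b[k:]
    A11_inv_A12 = np.linalg.solve(A11, A12)   # eliminate the first block
    A11_inv_b1 = np.linalg.solve(A11, b1)
    S = A22 - A21 @ A11_inv_A12               # Schur complement = next-level matrix
    x2 = multilevel_solve(S, b2 - A21 @ A11_inv_b1, min_size)
    x1 = A11_inv_b1 - A11_inv_A12 @ x2        # back-substitution
    return np.concatenate([x1, x2])

rng = np.random.default_rng(1)
A = rng.standard_normal((32, 32)) + 32 * np.eye(32)
b = rng.standard_normal(32)
x = multilevel_solve(A, b)
print(np.linalg.norm(A @ x - b))              # should be near machine precision
```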
“…One common practice to extract parallelism is level scheduling, such as wavefront ordering, but its benefit is limited by synchronization overhead and still insufficient parallelism. Other techniques such as coloring or graph partitioning allow efficient coarse-grained to mid-grained parallelism with little overhead (see, for example, [2][3][4][5][6]), at the expense of slightly reduced mathematical efficiency. More detailed reviews can be found in [7][8][9].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
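To make the level-scheduling idea from this excerpt concrete, here is a small hypothetical sketch (the matrix and the level_schedule helper are my own, not from the cited works): each row of a sparse lower-triangular factor is assigned a level equal to the length of its longest dependency chain, and all rows within one level could be solved simultaneously in a parallel triangular solve.

```python
# Illustrative level scheduling for a sparse lower-triangular solve:
# rows in the same level have no dependencies on each other.
import numpy as np
import scipy.sparse as sp

def level_schedule(L):
    """Return level[i] = 1 + length of the longest dependency chain feeding row i."""
    L = L.tocsr()
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        cols = L.indices[L.indptr[i]:L.indptr[i + 1]]
        deps = [j for j in cols if j < i]          # off-diagonal dependencies
        level[i] = 1 + max((level[j] for j in deps), default=0)
    return level

L = sp.csr_matrix(np.array([
    [2., 0., 0., 0.],
    [1., 2., 0., 0.],
    [0., 0., 2., 0.],
    [0., 1., 1., 2.],
]))
print(level_schedule(L))   # [1 2 1 3]: rows 0 and 2 could be processed in parallel
```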
“…Iterative solvers are highly memory efficient and easier to parallelize on OpenMP or shared-memory platforms [3]. Furthermore, iterative methods often require custom numerical tools, such as preconditioning techniques, to be efficient [4]. In addition, in problems involving highly ill-conditioned matrices, iterative solvers suffer from poor performance.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
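As a small self-contained illustration of the point about preconditioning and ill-conditioning (my own example, not taken from the cited references), the sketch below runs conjugate gradients on a deliberately ill-conditioned SPD matrix with and without a simple Jacobi preconditioner; the unpreconditioned run needs vastly more iterations.

```python
# Why preconditioning matters: CG on an SPD matrix whose eigenvalues span
# eight orders of magnitude, with and without a Jacobi (diagonal) preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
d = np.logspace(0, 8, n)                  # eigenvalues from 1 to 1e8
A = sp.diags(d).tocsr()
b = np.ones(n)

iters = {"plain": 0, "jacobi": 0}
def counter(key):
    def cb(xk):
        iters[key] += 1
    return cb

M = spla.LinearOperator((n, n), matvec=lambda x: x / d)   # Jacobi preconditioner

x_plain, _ = spla.cg(A, b, maxiter=2000, callback=counter("plain"))
x_prec, _ = spla.cg(A, b, M=M, maxiter=2000, callback=counter("jacobi"))
print(iters)   # e.g. hundreds of iterations vs. a handful with the preconditioner
```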