1994
DOI: 10.1007/bf02521112

A parallel implementation of the restarted GMRES iterative algorithm for nonsymmetric systems of linear equations

Cited by 32 publications (11 citation statements)
References 10 publications
“…A sufficiently large grid will allow this to happen; in our specific case we are interested in the case where the ratio M/N is large, since that maximizes the amount of local computation for a given p while keeping the communication time small relative to the local computation. This approach has been applied successfully in other parallel applications (see [16][17][18][19]). However, there are two penalties brought about by the parallel computation of Equations (20)-(23): 1. the time needed to set up the asynchronous sends and to retrieve data from the local communications buffer into the appropriate memory locations of the user's program; 2. the computation of reductions, needed to obtain the norms used in the stopping criteria of the iterations.…”
Section: Parallelization of the Methods (mentioning)
confidence: 98%
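The two overheads named in this excerpt, posting asynchronous point-to-point transfers plus unpacking the received boundary data, and the global reduction behind the stopping-test norms, can be illustrated with a minimal mpi4py sketch. This is not the cited or citing paper's code; the 1D process ring, the single-element halos, and the names `u_local`, `ghost_left`, `ghost_right` are assumptions made for brevity.

```python
# Illustrative sketch only (not the paper's implementation).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

M = 1000                        # assumed number of local unknowns per process
u_local = np.random.rand(M)     # local part of the iterate

# Penalty 1: posting the asynchronous transfers and unpacking the received
# boundary data into the buffers the local computation expects.
ghost_left, ghost_right = np.empty(1), np.empty(1)
reqs = [
    comm.Isend(u_local[:1],  dest=left,  tag=0),   # my left boundary value
    comm.Isend(u_local[-1:], dest=right, tag=1),   # my right boundary value
    comm.Irecv(ghost_left,   source=left,  tag=1),
    comm.Irecv(ghost_right,  source=right, tag=0),
]
MPI.Request.Waitall(reqs)       # halo data is now in place

# Penalty 2: a global reduction to form the norm used in the stopping test.
local_sq = float(np.dot(u_local, u_local))
global_norm = np.sqrt(comm.allreduce(local_sq, op=MPI.SUM))
if rank == 0:
    print("norm for stopping test:", global_norm)
```

The sketch only shows where the time goes: the Isend/Irecv setup and buffer unpacking scale with the number of messages, while the allreduce is a latency-bound collective that every stopping test pays.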
“…Also, note that the initial condition for w is the solution of Equation (14); as it is solved in the form of (17), the computation can be organized such that one data exchange is suppressed, since for the first iteration (i.e. for k = 0 at t = 0) a processor will already have received columns w_{:,m} and w_{:,1} from its left and right neighbouring processors, which is done in the last iteration prior to convergence using (17).…”
Section: Parallelization of the Methods (mentioning)
confidence: 99%
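A minimal sketch of the bookkeeping this excerpt describes, assuming a helper that performs the usual neighbour exchange: the exchange is simply skipped on the first sweep when the halo columns are already in place from the last iteration of the previous solve. The names `exchange_boundaries` and `halo_is_fresh` are hypothetical, not taken from the citing paper.

```python
# Minimal sketch; exchange_boundaries and halo_is_fresh are hypothetical names.
def iterate(w_local, exchange_boundaries, n_iters, halo_is_fresh=True):
    """Run n_iters sweeps, skipping the first halo exchange when the boundary
    columns are already present from the previous solve's final iteration."""
    for k in range(n_iters):
        if k > 0 or not halo_is_fresh:
            exchange_boundaries(w_local)   # usual neighbour exchange
        # ... local update of w_local using the (already received) halo ...
    return w_local
```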
“…When evaluating the matrix-vector products with A or solving the systems with Q, the special form of Equation (13) can be exploited in order to minimize the computational effort. A detailed explanation of the corresponding implementation of the restarted GMRES method can be found in Reference [20].…”
Section: Preconditioned Restarted GMRES Methods (mentioning)
confidence: 99%
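The excerpt does not reproduce Equation (13), so the sketch below only shows the generic hook through which a structure-exploiting product with A and solve with Q would be supplied to a restarted GMRES solver; SciPy's gmres with a LinearOperator stands in for the implementation of Reference [20], and `apply_A`, `solve_Q`, and the diagonal stand-in `d` are illustrative assumptions.

```python
# Hedged illustration: the operators below are placeholders, not Equation (13).
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 200
d = 2.0 + np.arange(n) / n            # illustrative diagonal only

def apply_A(v):
    # a structure-exploiting product y = A v would go here
    return d * v

def solve_Q(r):
    # a structure-exploiting solve Q z = r would go here
    return r / d

A = LinearOperator((n, n), matvec=apply_A)
Q = LinearOperator((n, n), matvec=solve_Q)   # SciPy's M argument applies Q^{-1}

b = np.ones(n)
x, info = gmres(A, b, M=Q, restart=30, maxiter=1000)
print(info, np.linalg.norm(apply_A(x) - b))  # info == 0 means converged
```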
“…(7), which provides fast and monotonically decreasing convergence [6][7][8]. However, the restarted GMRES method suffers from poor scalability in parallelization [18] and hence is replaced in this work by a CG-based method, the stabilized biconjugate gradient (BiCGSTAB) method, for cheaper memory usage, comparable convergence, and better parallel scalability. When applying the BiCGSTAB method to the iterative solution of the interface Eq.…”
Section: Iterative Solver for the Interface Problem (mentioning)
confidence: 98%
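As a hedged illustration of the trade-off this excerpt describes (not the citing paper's interface system), the same nonsymmetric test matrix can be solved with restarted GMRES and with BiCGSTAB from SciPy: BiCGSTAB keeps only a fixed handful of work vectors, whereas GMRES(m) stores an m-vector Krylov basis between restarts. The tridiagonal test matrix is an arbitrary stand-in.

```python
# Hedged sketch of the GMRES(m) vs. BiCGSTAB swap on an arbitrary test system.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, bicgstab

n = 500
A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x_g, info_g = gmres(A, b, restart=20, maxiter=2000)   # stores ~20 basis vectors
x_b, info_b = bicgstab(A, b, maxiter=2000)            # fixed number of vectors

print("GMRES(20):", info_g, np.linalg.norm(A @ x_g - b))
print("BiCGSTAB :", info_b, np.linalg.norm(A @ x_b - b))
```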