1989
DOI: 10.1137/0910073

Krylov Subspace Methods on Supercomputers

Abstract: This paper presents a short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of diffi…
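
The survey's focus is on how Krylov subspace kernels map onto vector and parallel hardware, not on the basic iterations themselves. For orientation only, a minimal sketch of the classic unpreconditioned conjugate gradient method is given below; the test problem, tolerance, and function name are illustrative and not taken from the paper.

```python
# Minimal sketch of the classic (unpreconditioned) conjugate gradient
# iteration for a symmetric positive definite system Ax = b.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                      # matrix-vector product: the dominant kernel
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # new search direction
        rs_old = rs_new
    return x

# Small SPD test problem (1D Laplacian), purely illustrative.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
```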

Cited by 205 publications (89 citation statements)
References 96 publications
“…All the matrix-vector products with the coefficient matrix A and with each of the (possibly factorized) approximate inverse preconditioners are vectorizable operations. To this end, after the approximate inverse preconditioners have been computed, they are transformed into the JAD, or jagged diagonal, format (see [56,68]). The same is done with the coefficient matrix A.…”
Section: Further Notes on Implementations
Citation type: mentioning (confidence: 99%)
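
To make the JAD idea in the excerpt concrete, here is a small sketch assuming the usual construction: rows are permuted by decreasing nonzero count and the k-th nonzero of every sufficiently long row is packed into one contiguous "jagged diagonal", so each step of the matrix-vector product is a long, vectorizable gather/multiply/add. The dense input and function names are illustrative, not taken from the cited codes.

```python
# Jagged diagonal (JAD) storage and matrix-vector product: illustrative sketch.
import numpy as np

def to_jad(A_dense):
    """Convert a dense matrix to jagged diagonal (JAD) storage."""
    n = A_dense.shape[0]
    nnz_per_row = np.count_nonzero(A_dense, axis=1)
    perm = np.argsort(-nnz_per_row)               # rows by decreasing nonzero count
    rows = [np.flatnonzero(A_dense[p]) for p in perm]
    njd = max(len(r) for r in rows)               # number of jagged diagonals
    jdiag, jcol, jd_len = [], [], []
    for k in range(njd):
        m = sum(1 for r in rows if len(r) > k)    # rows long enough for diagonal k
        jd_len.append(m)
        jcol.append(np.array([rows[i][k] for i in range(m)]))
        jdiag.append(np.array([A_dense[perm[i], rows[i][k]] for i in range(m)]))
    return perm, jdiag, jcol, jd_len

def jad_matvec(n, perm, jdiag, jcol, jd_len, x):
    """y = A @ x in JAD form; each jagged diagonal is one long vector operation."""
    y_perm = np.zeros(n)
    for vals, cols, m in zip(jdiag, jcol, jd_len):
        y_perm[:m] += vals * x[cols]              # gather, multiply, add
    y = np.empty(n)
    y[perm] = y_perm                              # undo the row permutation
    return y

A = np.array([[4., 1., 0., 0.],
              [1., 4., 1., 0.],
              [0., 1., 4., 1.],
              [0., 0., 1., 4.]])
x = np.arange(4, dtype=float)
perm, jdiag, jcol, jd_len = to_jad(A)
assert np.allclose(jad_matvec(4, perm, jdiag, jcol, jd_len, x), A @ x)
```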
“…In addition, the forward and back substitution steps, which must be conducted each time the preconditioner is applied, have been fully vectorized with a level-scheduling algorithm [38]. Vectorization is accomplished by keeping a list of all edges that contribute to the nodes in a given level and coloring those edges to allow vectorization. Numerical experiments with the level-scheduling algorithm indicate that the computer time required for the forward and backward substitutions is reduced by a factor of approximately 3.3 in two dimensions and by a factor of approximately 2.8 in three dimensions.…”
Section: Time-Advancement Scheme
Citation type: mentioning (confidence: 99%)
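
The level-scheduling idea behind that vectorized substitution can be sketched as follows: each unknown of a sparse triangular system is assigned to a "level" so that it depends only on unknowns in strictly earlier levels, and all unknowns within one level can then be updated together. The dense storage and small example below are purely illustrative, not the edge-coloring variant described in the excerpt.

```python
# Level scheduling for a sparse lower-triangular solve: illustrative sketch.
import numpy as np

def level_schedule(L):
    """Assign each unknown of a lower-triangular matrix L to a level."""
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        deps = [j for j in range(i) if L[i, j] != 0.0]
        if deps:
            level[i] = 1 + max(level[j] for j in deps)
    return [np.flatnonzero(level == k) for k in range(level.max() + 1)]

def forward_solve_by_levels(L, b):
    """Solve L x = b, processing one level at a time."""
    x = np.zeros(L.shape[0])
    for rows in level_schedule(L):
        # All rows in this level are mutually independent: each needs only
        # x-values from earlier levels, so this inner loop is the part that
        # can be executed as one vector/parallel operation.
        for i in rows:
            x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2., 0., 0., 0.],
              [1., 3., 0., 0.],
              [0., 0., 1., 0.],
              [0., 2., 1., 4.]])
b = np.array([2., 5., 1., 9.])
assert np.allclose(forward_solve_by_levels(L, b), np.linalg.solve(L, b))
```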
“…Thus, not only can the computational cost be dissipated over the lifetime of the learning algorithm, but some columns of the inverse might not be computed at all if the corresponding states are never visited. As a final note on computational matters, efficient techniques for storing the sparse matrix can be found in [19,36] and [3].…”
Section: LSTD-PO Actor-Critic for POMDP with Indirectly Observed Cost
Citation type: mentioning (confidence: 99%)
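
The excerpt points to sparse storage schemes only by citation. As one common example (not necessarily the scheme described in [19,36] or [3]), a minimal compressed sparse row (CSR) layout and its matrix-vector product are sketched below; all names and the small test matrix are illustrative.

```python
# Compressed sparse row (CSR) storage and matrix-vector product: illustrative sketch.
import numpy as np

def to_csr(A_dense):
    """Build CSR arrays (values, column indices, row pointers) from a dense matrix."""
    values, col_idx, row_ptr = [], [], [0]
    for row in A_dense:
        nz = np.flatnonzero(row)
        values.extend(row[nz])
        col_idx.extend(nz)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x using CSR arrays, one sparse dot product per row."""
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):
        lo, hi = row_ptr[i], row_ptr[i + 1]
        y[i] = values[lo:hi] @ x[col_idx[lo:hi]]
    return y

A = np.array([[4., 0., 1.],
              [0., 3., 0.],
              [1., 0., 2.]])
x = np.array([1., 2., 3.])
vals, cols, ptrs = to_csr(A)
assert np.allclose(csr_matvec(vals, cols, ptrs, x), A @ x)
```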