1981
DOI: 10.1090/s0025-5718-1981-0616364-6

Krylov subspace methods for solving large unsymmetric linear systems

Abstract: Some algorithms based upon a projection process onto the Krylov subspace K_m = Span(r_0, Ar_0, ..., A^(m-1) r_0) are developed, generalizing the method of conjugate gradients to unsymmetric systems. These methods are extensions of Arnoldi's algorithm for solving eigenvalue problems. The convergence is analyzed in terms of the distance of the solution to the subspace K_m and some error bounds are established showing, in particular, a similarity with the conjugate gradient method (for symmetric matrices) when …
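To make the projection idea in the abstract concrete, here is a minimal numerical sketch in the spirit of an Arnoldi-based Galerkin projection onto K_m (the kind of process the paper develops as a generalization of conjugate gradients); it is an illustration on a random dense NumPy test matrix, not a reproduction of the paper's specific algorithms or error bounds.

import numpy as np

def arnoldi_projection_solve(A, b, x0, m):
    # Build an orthonormal basis V of K_m = Span(r0, A r0, ..., A^(m-1) r0)
    # with the Arnoldi process, then impose the Galerkin condition via the
    # small projected system H y = ||r0|| e1.
    n = b.size
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / beta
    for j in range(m):
        w = A @ V[:, j]                      # A enters only through matrix-vector products
        for i in range(j + 1):               # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:              # breakdown: exact solution already lies in K_(j+1)
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = beta
    y = np.linalg.solve(H[:m, :m], e1)       # projected system is only m x m
    return x0 + V[:, :m] @ y

# Toy usage on a small unsymmetric system
rng = np.random.default_rng(0)
A = np.eye(50) + 0.1 * rng.standard_normal((50, 50))
b = rng.standard_normal(50)
x = arnoldi_projection_solve(A, b, np.zeros(50), 30)
print("residual norm:", np.linalg.norm(b - A @ x))

In practice the dimension m is kept modest and the process is restarted, or the orthogonalization is truncated, to control storage and work per step.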

Cited by 381 publications (29 citation statements)
References 15 publications

“…The essential feature of efficient iterative methods and in particular KISMs is that the system matrix A is not explicitly involved in the process but rather its multiplication with a vector. This is particularly useful for sparse matrices for which storage is no longer a critical problem as in direct methods (Saad 1996). However, the convergence rate of most iterative methods has been found to depend strongly on the proper choice of the pre-conditioner, and hence indirectly on the mesh size.…”
Section: Methods of Solution
mentioning
confidence: 99%
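As a concrete illustration of the matrix-free point above, and of how strongly the preconditioner matters, the following sketch uses SciPy's GMRES as a stand-in Krylov solver; the test matrix (a convection-diffusion-like stencil), the LinearOperator wrapper, and the ILU preconditioner are illustrative assumptions, not taken from the cited work.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# A sparse, unsymmetric test matrix; the solver never needs its explicit entries
A = sp.diags([-1.2, 2.0, -0.8], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Matrix-free view: the Krylov solver only ever asks for the product A @ v
Aop = spla.LinearOperator((n, n), matvec=lambda v: A @ v)

# An incomplete-LU preconditioner; the quoted point is that convergence
# depends strongly on this choice
ilu = spla.spilu(A)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x_plain, info_plain = spla.gmres(Aop, b)        # unpreconditioned
x_prec, info_prec = spla.gmres(Aop, b, M=M)     # ILU-preconditioned
print(info_plain, info_prec, np.linalg.norm(b - A @ x_prec))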
“…The idea behind Krylov methods is to use repeated action of the Jacobian DF(U) on some initial velocity field to generate a set of fields that spans a relatively small Krylov subspace, in which a good approximate solution of a large linear problem can be found. Such methods are increasingly employed to solve the two major problems of numerical linear algebra: calculating eigenvectors [1,24,38,47,49,52,56,57] and solving linear equations [48,50,51,61]. A more recent innovation is the use of Krylov methods to solve systems of differential equations [1,22,23,39,45,54].…”
Section: ∂_t U = F(U) = −(U·∇)U + ν∇²U − ∇p (1.1)
mentioning
confidence: 99%
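A minimal sketch of that Jacobian-action idea, assuming a toy nonlinear residual F and a finite-difference approximation of the product DF(U)·v fed to SciPy's GMRES; it is not the cited work's Navier-Stokes setup, only an illustration of how a Krylov solver can run on repeated Jacobian action alone.

import numpy as np
import scipy.sparse.linalg as spla

def F(u):
    # Hypothetical nonlinear residual standing in for F(U); the cited work's F
    # acts on a velocity field, not on this toy periodic problem.
    return 2.0 * u + u**3 + 0.3 * (np.roll(u, 1) - np.roll(u, -1)) - 1.0

def jacobian_free_newton_step(F, u, eps=1e-7):
    # One Newton step: solve DF(u) du = -F(u) with GMRES, where the action of
    # the Jacobian on a vector v is approximated by a finite difference, so the
    # Krylov solver only ever sees (approximate) Jacobian-vector products.
    r = F(u)
    Jv = lambda v: (F(u + eps * v) - r) / eps
    J = spla.LinearOperator((u.size, u.size), matvec=Jv)
    du, info = spla.gmres(J, -r)
    return u + du, info

u = np.zeros(64)
for _ in range(8):
    u, info = jacobian_free_newton_step(F, u)
print("nonlinear residual:", np.linalg.norm(F(u)), "gmres info:", info)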
“…With convex constraints that reflect our prior knowledge, the problem can be written as a convex optimization. The alternating direction method of multipliers (ADMM) [11,26,27] is a convex optimization tool, which recently received considerable attention for its ease of incorporating diverse convex constraints into the problem, ease of implementation, and fast computational speed. The Krylov subspace method [44,45] is a projection method which restricts the solution space of the problem g = Af to the subspace spanned by the nth-order Krylov sequence x, Ax, A²x, ….…”
mentioning
confidence: 99%
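The following sketch illustrates that restriction of the solution space to the Krylov sequence for a toy problem g = A f, using a least-squares (minimal-residual) pick within the subspace; the matrix, starting vector, and subspace dimension are illustrative assumptions, not from the cited work.

import numpy as np

def krylov_sequence_solve(A, g, x, m):
    # Restrict the solution space of g = A f to span{x, Ax, A^2 x, ..., A^(m-1) x}:
    # orthonormalize the Krylov sequence (Gram-Schmidt, for numerical stability)
    # and pick the member of the subspace minimizing ||g - A f||.
    n = g.size
    K = np.zeros((n, m))
    v = x / np.linalg.norm(x)
    for j in range(m):
        K[:, j] = v
        v = A @ v
        v = v - K[:, : j + 1] @ (K[:, : j + 1].T @ v)
        v = v / np.linalg.norm(v)
    y, *_ = np.linalg.lstsq(A @ K, g, rcond=None)   # least-squares coefficients in the basis
    return K @ y

# Illustrative usage: approximate f from g = A f within a 40-dimensional Krylov subspace
rng = np.random.default_rng(1)
A = np.eye(80) + 0.05 * rng.standard_normal((80, 80))
f_true = rng.standard_normal(80)
g = A @ f_true
f_m = krylov_sequence_solve(A, g, g, 40)
print("residual norm:", np.linalg.norm(g - A @ f_m))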