1975
DOI: 10.1137/0712047
Solution of Sparse Indefinite Systems of Linear Equations

Cited by 1,402 publications (1,034 citation statements)
References 11 publications
“…Today there is a large community of researchers dedicated to the task of solving Cx = b; the spectral properties of C, along with appropriate iterative solvers and preconditioners, have been well studied [6]. The appearance of the zero matrix in the (2,2) block and the fact that A is positive definite mean that C falls into a relatively easy class of saddle point matrices, for which the minimal residual method (minres, [25]) is an optimal iterative solver. Convergence can be accelerated using symmetric and positive definite preconditioners, of which there are two well-known types.…”
mentioning
confidence: 99%
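The excerpt above can be illustrated with a minimal sketch using SciPy's `minres` on a small saddle point system C = [[A, Bᵀ], [B, 0]]; the matrices A and B here are made-up examples, chosen only so that A is symmetric positive definite and B has full row rank:

```python
import numpy as np
from scipy.sparse.linalg import minres

# Hypothetical example data: A symmetric positive definite (2x2),
# B of full row rank (1x2).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 2.0]])

# Saddle point matrix with a zero (2,2) block: symmetric but indefinite.
C = np.block([[A, B.T],
              [B, np.zeros((1, 1))]])

b = np.array([1.0, 2.0, 3.0])
x, info = minres(C, b)  # info == 0 signals convergence
```

Because C is symmetric indefinite, MINRES applies directly, whereas CG would not; this is the sense in which the zero-block saddle point structure is a "relatively easy" case.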
“…This is the case when we use a Krylov subspace method starting with a zero vector. For example, the conjugate gradient (CG) method on the normal equation leads to the min-length solution (see Paige and Saunders [20]). In practice, CGLS [16] or LSQR [21] are preferable because they are equivalent to applying CG to the normal equation in exact arithmetic but are numerically more stable.…”
Section: Least Squares Solvers — mentioning
confidence: 99%
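A minimal sketch of the min-length property, assuming SciPy's `lsqr` and a made-up rank-deficient matrix: because the Krylov iterates stay in the range of Aᵀ when started from zero, the computed solution coincides with the pseudoinverse (minimum-norm) solution.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Hypothetical rank-deficient example: the second column duplicates the
# first, so the least squares problem has infinitely many minimizers.
A = np.array([[1.0, 1.0],
              [2.0, 2.0],
              [3.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])

x = lsqr(A, b)[0]                # min-length least squares solution
x_pinv = np.linalg.pinv(A) @ b   # pseudoinverse gives the same answer
```

Here any x with x[0] + x[1] == 1 minimizes the residual, and both solvers pick the minimum-norm representative [0.5, 0.5].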
“…Finally, after it is decided that the estimate of the residual norm is small enough, the final factorization of H_m will be used to fully solve the system (3.4). Gaussian elimination with partial pivoting gives satisfactory results in general, but one might as well use a more stable decomposition, such as the LQ decomposition in [14], [15], although at a higher cost.…”
mentioning
confidence: 99%
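The final small solve described above can be sketched as follows, assuming SciPy's LU routines and a made-up upper Hessenberg matrix H_m of the kind an Arnoldi-type process produces; `lu_factor` performs exactly the Gaussian elimination with partial pivoting the excerpt mentions:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical 4x4 upper Hessenberg matrix (one nonzero subdiagonal),
# as built by an Arnoldi-type process, and a small right-hand side.
H = np.array([[2.0, 1.0, 0.5, 0.2],
              [1.0, 3.0, 1.0, 0.4],
              [0.0, 1.0, 2.5, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
g = np.array([1.0, 0.0, 0.0, 0.0])

lu, piv = lu_factor(H)        # Gaussian elimination with partial pivoting
y = lu_solve((lu, piv), g)    # back/forward substitution with the factors
```

For a Hessenberg matrix, the elimination touches only one subdiagonal per column, which is why this small solve is cheap compared with the more stable (but costlier) LQ alternative.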
“…When the matrix A is symmetric, then, by taking p = 2, we obtain a version of the conjugate gradient method which is known to be equivalent to the Lanczos algorithm; see [14]. In that case the vectors v_1, .…”
mentioning
confidence: 99%