Abstract. We present a variant of the AINV factorized sparse approximate inverse algorithm that is applicable to any symmetric positive definite matrix. The new preconditioner is breakdown-free and, when used in conjunction with the conjugate gradient method, results in a reliable solver for highly ill-conditioned linear systems. We also investigate an alternative approach to a stable approximate inverse algorithm, based on the idea of diagonally compensated reduction of matrix entries. The results of numerical tests on challenging linear systems arising from finite element modeling of elasticity and diffusion problems are presented.

Key words. sparse linear systems, finite element matrices, preconditioned conjugate gradients, factorized sparse approximate inverses, incomplete conjugation, stabilized AINV, diagonally compensated reduction

AMS subject classifications. Primary, 65F10, 65N22, 65F50; Secondary, 15A06

PII. S1064827599356900

1. Introduction. We consider the solution of sparse linear systems Ax = b, where A is a symmetric positive definite (SPD) matrix, by the preconditioned conjugate gradient method. In the last few years there has been considerable interest in explicit preconditioning techniques based on directly approximating A⁻¹ with a sparse matrix M; see, e.g., [7]. Although the main motivation for the development of sparse approximate inverse preconditioners comes from parallel processing, it is becoming clear that these techniques are also of interest because of their robustness. Sparse approximate inverses are often applicable to difficult problems where other preconditioners may break down [4]. For instance, incomplete factorization preconditioners, while widely popular and fairly robust, are not always reliable, in that the incomplete factorization process may
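The factorized form that underlies AINV-type preconditioners can be sketched in a few lines: A-orthogonalizing the unit basis vectors yields a unit upper triangular Z with Zᵀ A Z = diag(d), so A⁻¹ = Z diag(1/d) Zᵀ, and dropping small entries of Z gives a sparse approximation. The dense sketch below is an illustration of this idea under that standard construction, not the paper's stabilized algorithm; the `drop_tol` parameter and the toy matrix are our own choices.

```python
import numpy as np

def ainv_spd(A, drop_tol=0.0):
    # Sketch: A-orthogonalize the unit basis vectors by modified
    # Gram-Schmidt. The resulting columns of Z satisfy
    # Z.T @ A @ Z = diag(d), hence  A^{-1} = Z @ diag(1/d) @ Z.T.
    # Dropping entries below drop_tol gives a sparse *approximate*
    # inverse; drop_tol = 0 reproduces the exact inverse factorization.
    n = A.shape[0]
    Z = np.eye(n)
    d = np.empty(n)
    for j in range(n):
        for i in range(j):
            c = Z[:, i] @ (A @ Z[:, j]) / d[i]
            Z[:, j] -= c * Z[:, i]
        Z[np.abs(Z[:, j]) < drop_tol, j] = 0.0  # drop small entries
        Z[j, j] = 1.0                           # keep the unit diagonal
        d[j] = Z[:, j] @ A @ Z[:, j]
    return Z, d

# Toy SPD matrix (1-D diffusion stencil); no dropping, so the
# factorization is exact and M = Z diag(1/d) Z^T equals A^{-1}.
n = 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Z, d = ainv_spd(A)
M = Z @ np.diag(1.0 / d) @ Z.T
assert np.allclose(M @ A, np.eye(n))
```

With dropping enabled, the pivots d[j] computed from the sparsified Z can become nonpositive for ill-conditioned A; avoiding exactly this breakdown is what motivates the stabilized variant described in the abstract.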
Properties of the sum of the q algebraically largest eigenvalues of any real symmetric matrix, as a function of the diagonal entries of the matrix, are derived. Such a sum is convex but not necessarily everywhere differentiable. A convergent procedure is presented for determining a minimizing point of any such sum subject to the condition that the trace of the matrix is held constant. An implementation of this procedure is described and numerical results are included.

Minimization problems of this kind arose in graph partitioning studies [8]. Use of existing procedures for minimization required either a strategy for selecting, at each stage, a direction of search from the subdifferential and an appropriate step along the chosen direction [10, 13], or computationally feasible characterizations of certain enlargements of subdifferentials [1, 6], neither of which could be easily determined for the given problem. The arguments use results from eigenelement analysis and from optimization theory.
Many engineering applications require the computation of the q algebraically largest eigenvalues and a corresponding eigenspace of a large, sparse, real, symmetric matrix. An iterative, block version of the symmetric Lanczos algorithm has been developed for this computation. There are no restrictions on the sparsity pattern within the matrix or on the distribution of the eigenvalues of the matrix. Zero eigenvalues, eigenvalues equal in magnitude but opposite in sign, and multiple eigenvalues can all be handled directly by the procedure.
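The computation the abstract describes, the q algebraically largest eigenvalues of a large sparse symmetric matrix, can be illustrated with a modern Lanczos-based routine. The sketch below uses SciPy's `eigsh` (ARPACK's implicitly restarted Lanczos) as a stand-in; it is not the block algorithm of the paper, and the tridiagonal test matrix is an arbitrary choice.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Sparse symmetric test matrix: 1-D Laplacian stencil, n = 200.
n, q = 200, 4
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
          shape=(n, n), format="csr")

# which="LA": the q algebraically largest eigenvalues (Lanczos-based).
vals = eigsh(A, k=q, which="LA", return_eigenvectors=False)

# Cross-check against a dense eigensolver on the same matrix.
dense = np.linalg.eigvalsh(A.toarray())
assert np.allclose(np.sort(vals), dense[-q:])
```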