An efficient refinement algorithm is proposed for symmetric eigenvalue problems. The structure of the algorithm is straightforward, primarily comprising matrix multiplications. We show that the proposed algorithm converges quadratically if a modestly accurate initial guess is given, including the case of multiple eigenvalues. Our convergence analysis can be extended to Hermitian matrices. Numerical results demonstrate excellent performance of the proposed algorithm in terms of convergence rate and overall computational cost, and show that the proposed algorithm is considerably faster than a standard approach using multiple-precision arithmetic.
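A refinement step of this kind (a Newton-type update assembled almost entirely from matrix multiplications) can be sketched as follows. This is an illustrative reconstruction under the assumption of well-separated simple eigenvalues, not the authors' implementation; function names, the test matrix, and tolerances are all chosen for the demo.

```python
import numpy as np

def refine_step(A, X):
    """One Newton-type refinement step for an approximate eigenvector
    matrix X of the symmetric matrix A (simple, well-separated
    eigenvalues assumed; illustrative sketch only)."""
    n = X.shape[1]
    R = np.eye(n) - X.T @ X      # orthogonality residual
    S = X.T @ A @ X              # approximate diagonalization of A
    lam = np.diag(S).copy()      # current eigenvalue approximations
    # First-order correction X' = X(I + E): orthogonality X'^T X' = I
    # gives E + E^T = R, and requiring X'^T A X' to be diagonal gives
    # e_ij = (s_ij + lam_j r_ij) / (lam_j - lam_i) for i != j.
    den = lam[None, :] - lam[:, None]          # den[i, j] = lam_j - lam_i
    E = (S + R * lam[None, :]) / np.where(den == 0.0, 1.0, den)
    np.fill_diagonal(E, np.diag(R) / 2.0)      # diagonal from E + E^T = R
    return X + X @ E, lam

# Demo: perturb the exact eigenvectors of a test matrix, then refine.
rng = np.random.default_rng(0)
n = 20
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.arange(1.0, n + 1)) @ Q.T
A = (A + A.T) / 2                              # enforce exact symmetry
X = np.linalg.eigh(A)[1] + 1e-4 * rng.standard_normal((n, n))
residual = lambda X: np.linalg.norm(A @ X - X * np.diag(X.T @ A @ X)[None, :])
residual_before = residual(X)
for _ in range(5):
    X, lam = refine_step(A, X)
residual_after = residual(X)
```

Each step costs a few matrix products, and near the solution the error is squared at every iteration, which is the quadratic convergence described above.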
We are concerned with accurate eigenvalue decomposition of a real symmetric matrix A. In a previous paper (Ogita and Aishima, Jpn J Ind Appl Math 35(3):1007–1035, 2018), we proposed an efficient refinement algorithm for improving the accuracy of all eigenvectors, which converges quadratically if a sufficiently accurate initial guess is given. However, since the accuracy of eigenvectors depends on the eigenvalue gap, it is difficult to provide such an initial guess when A has clustered eigenvalues. To overcome this problem, we propose a novel algorithm, built on the algorithm of the previous paper, that can refine approximate eigenvectors corresponding to clustered eigenvalues. Numerical results are presented showing excellent performance of the proposed algorithm in terms of convergence rate and overall computational cost, and illustrating an application to a quantum materials simulation.

In [17], we proposed a refinement algorithm for the eigenvalue decomposition of A that works not on an individual eigenvector but on all eigenvectors at once. Since the algorithm is based on Newton's method, it converges quadratically, provided that the initial guess is sufficiently accurate. In practice, although the algorithm refines computed eigenvectors corresponding to sufficiently separated simple eigenvalues, it cannot refine computed eigenvectors corresponding to "nearly" multiple eigenvalues. This is because it is difficult for standard numerical algorithms in floating-point arithmetic to provide sufficiently accurate initial approximate eigenvectors for nearly multiple eigenvalues, as shown in (2). The purpose of this paper is to remedy this problem; that is, we aim to develop a refinement algorithm for the eigenvalue decomposition of a symmetric matrix with clustered eigenvalues.

We briefly explain the idea of the proposed algorithm. We focus on the so-called sin θ theorem of Davis and Kahan [5, Section 2], stated as follows.
For an index set J with |J| = ℓ < n, let X_J ∈ R^{n×ℓ} denote the eigenvector matrix comprising x^(j) for all j ∈ J. For 1 ≤ k ≤ ℓ, let μ_k denote the Ritz values for the subspace spanned by some given vectors, with μ_1 ≤ ··· ≤ μ_ℓ, and let z_k be the corresponding normalized Ritz vectors. Assume that the eigenvalues λ_i for all i ∉ J lie entirely outside of [μ_1, μ_ℓ]. Let Gap denote the smallest difference between the Ritz values μ_k, 1 ≤ k ≤ ℓ, and the eigenvalues λ_i for i ∉ J, i.e., Gap := min{|μ_k − λ_i| : 1 ≤ k ≤ ℓ, i ∉ J}. Moreover, let Z_J := [z_1, …, z_ℓ] ∈ R^{n×ℓ}. Then, writing R_J := AZ_J − Z_J diag(μ_1, …, μ_ℓ) for the Ritz residual, we obtain

  ‖(I − X_J X_J^T) Z_J‖_F ≤ ‖R_J‖_F / Gap.

This indicates that the subspace spanned by the eigenvectors associated with the clustered eigenvalues is not very sensitive to perturbations, provided that the gap between the clustered eigenvalues and the others is sufficiently large. This means that backward stable algorithms can provide a sufficiently accurate initial guess of the "subspace" corresponding to the clustered eigenvalues. To extract eigenvectors from the subspace correctly, relatively larger ga...
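The sin θ bound can be checked numerically. The following sketch is illustrative only (the test matrix, the cluster, the perturbation size, and all variable names are assumptions for the demo): it builds a symmetric matrix with three clustered eigenvalues, perturbs the corresponding invariant subspace, forms Ritz values and vectors, and compares the subspace angle with the residual-over-Gap bound.

```python
import numpy as np

rng = np.random.default_rng(1)
n, ell = 8, 3
# Three clustered eigenvalues near 1; the rest are well separated from them.
lams = np.array([1.0, 1.0001, 1.0002, 5.0, 6.0, 7.0, 8.0, 9.0])
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(lams) @ Q.T
A = (A + A.T) / 2                      # enforce exact symmetry
w, V = np.linalg.eigh(A)               # ascending eigenvalues
X_J = V[:, :ell]                       # exact eigenvectors of the cluster

# Perturb the invariant subspace, then form Ritz values/vectors from it.
Z0, _ = np.linalg.qr(X_J + 1e-3 * rng.standard_normal((n, ell)))
mu, W = np.linalg.eigh(Z0.T @ A @ Z0)  # Ritz values mu_1 <= ... <= mu_ell
Z_J = Z0 @ W                           # normalized Ritz vectors
R = A @ Z_J - Z_J @ np.diag(mu)        # Ritz residual

Gap = np.abs(mu[:, None] - w[None, ell:]).min()
sin_theta = np.linalg.norm((np.eye(n) - X_J @ X_J.T) @ Z_J)  # ~ ||sin Θ||_F
bound = np.linalg.norm(R) / Gap
```

Even though the cluster's eigenvalues differ only in the fourth decimal place, the subspace angle stays on the order of the perturbation, because Gap (here about 4) is large: this is exactly the robustness of the cluster subspace that the proposed algorithm exploits.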
Abstract. The dqds algorithm computes all the singular values of an n-by-n bidiagonal matrix to high relative accuracy in O(n²) cost. Its efficient implementation is now available as a LAPACK subroutine and is the preferred algorithm for this purpose. In this paper we incorporate into dqds a technique called aggressive early deflation, which has been applied successfully to the Hessenberg QR algorithm. Extensive numerical experiments show that aggressive early deflation often reduces the dqds runtime significantly. In addition, our theoretical analysis suggests that with aggressive early deflation, the performance of dqds is largely independent of the shift strategy. We confirm through experiments that the zero-shift version is often as fast as the shifted version. We give a detailed error analysis to prove that with our proposed deflation strategy, dqds computes all the singular values to high relative accuracy.

Key words. aggressive early deflation, dqds, singular values, bidiagonal matrix

AMS subject classifications. 65F15, 15A18

1. Introduction. The differential quotient-difference algorithm with shifts (dqds) computes all the singular values of an n-by-n bidiagonal matrix to high relative accuracy in O(n²) cost [11]. Its efficient implementation has been developed and is now available as the LAPACK subroutine DLASQ [30]. Because of its guaranteed relative accuracy and efficiency, dqds has now replaced the QR algorithm [7], which had been the default algorithm for computing the singular values of a bidiagonal matrix. The standard way of computing the singular values of a general matrix is to first apply suitable orthogonal transformations to reduce the matrix to bidiagonal form, and then use dqds [6]. dqds is also a major computational kernel in the MRRR algorithm for computing orthogonal eigenvectors of a symmetric tridiagonal matrix [8,9,10] and the singular value decomposition of a bidiagonal matrix [35] in O(n²) cost.
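For illustration, one zero-shift dqd sweep can be written in a few lines. This is a toy sketch operating on the squared bidiagonal entries q_i = a_i², e_i = b_i² (a practical implementation such as DLASQ adds shifts, deflation tests, and careful scaling, none of which are shown here); the demo matrix and iteration count are arbitrary choices for the example.

```python
import numpy as np

def dqd_sweep(q, e):
    """One zero-shift dqd sweep on the squared diagonal (q) and squared
    superdiagonal (e) of a positive upper-bidiagonal matrix."""
    n = len(q)
    qq = np.empty(n)
    ee = np.empty(n - 1)
    d = q[0]
    for i in range(n - 1):
        qq[i] = d + e[i]
        t = q[i + 1] / qq[i]   # common quotient, computed once
        ee[i] = e[i] * t
        d = d * t
    qq[n - 1] = d
    return qq, ee

# Toy demo: singular values of a 4x4 upper-bidiagonal matrix.
a = np.array([4.0, 3.0, 2.0, 1.0])    # diagonal entries
b = np.array([1.0, 1.0, 1.0])         # superdiagonal entries
q, e = a**2, b**2
for _ in range(150):                  # zero shift: linear convergence
    q, e = dqd_sweep(q, e)
sigma = np.sqrt(np.sort(q)[::-1])     # q converges to squared singular values
```

As the sweeps proceed, the e_i tend to zero and the q_i converge to the squared singular values; shifts (the "s" in dqds) accelerate this linear convergence, which is precisely where the shift strategy and deflation tests studied in this paper come into play.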
The aggressive early deflation strategy, introduced in [5], is known to greatly improve the performance of the Hessenberg QR algorithm for computing the eigenvalues of a general square matrix by deflating converged eigenvalues long before a conventional deflation strategy does. The primary contribution of this paper is the proposal of two deflation strategies for dqds based on aggressive early deflation. The first strategy is a direct specialization of aggressive early deflation to dqds. The second strategy, which takes full advantage of the bidiagonal structure of the matrix, is computationally more efficient. We present a detailed mixed forward-backward stability analysis proving that the second strategy guarantees high relative accuracy of all the computed singular values. The results of extensive numerical experiments demonstrate that aggressive early deflation significantly reduces the solution time of dqds in many cases. We observed speedups of up to a factor of 50, and in all our experiments the second strategy was at least as fast as DLASQ for every matrix of dimension larger than 3000.