Abstract
We describe a randomized Krylov‐subspace method for estimating the spectral condition number of a real matrix A, or indicating that it is numerically rank deficient. The main difficulty in estimating the condition number is estimating the smallest singular value
σ_min of A. Our method estimates this value by solving a consistent linear least-squares problem with a known solution using a specific Krylov‐subspace method called LSQR. In this method, the forward error tends to concentrate in the direction…
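The estimator described above can be sketched as follows. This is a hedged illustration of the idea, not the authors' implementation: the matrix size, random seed, and iteration limit are arbitrary choices for the demo, and SciPy's `lsqr` stands in for the paper's LSQR variant.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
m, n = 200, 100
A = rng.standard_normal((m, n))

# Consistent least-squares problem with a known random solution.
x_true = rng.standard_normal(n)
b = A @ x_true

# Run LSQR for a limited number of iterations; the forward error
# e = x_true - x_k tends to concentrate in the direction of a
# singular vector associated with sigma_min.
x_k = lsqr(A, b, atol=0.0, btol=0.0, iter_lim=20)[0]
e = x_true - x_k

# The quotient ||A e|| / ||e|| then approximates sigma_min(A); it is
# always bounded between sigma_min(A) and sigma_max(A).
sigma_min_est = np.linalg.norm(A @ e) / np.linalg.norm(e)

# Combined with an estimate of sigma_max (computed exactly here for
# the demo), this yields a condition-number estimate.
svals = np.linalg.svd(A, compute_uv=False)
kappa_est = svals[0] / sigma_min_est
```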
“…We start by deriving an auxiliary lemma that will be used to prove that in Algorithm 3.1 the unstructured perturbation converges to zero. For a given matrix T, define κ(T) := κ_F(T) = ‖T‖_F ‖T†‖_F to be the Frobenius condition number of T; see, e.g., [2,5,28]. We recall that T† denotes the pseudoinverse (the Moore-Penrose inverse) of T; see, e.g., [19].…”
Section: Elimination of the Unstructured Perturbation
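As a quick check of the definition quoted above, κ_F(T) = ‖T‖_F ‖T†‖_F can be evaluated directly with NumPy, whose matrix `norm` defaults to the Frobenius norm; the diagonal matrix here is an arbitrary toy example.

```python
import numpy as np

# Toy matrix with singular values 3 and 1.
T = np.diag([3.0, 1.0])

# Frobenius condition number: kappa_F(T) = ||T||_F * ||T_pinv||_F.
# np.linalg.norm of a matrix defaults to the Frobenius norm.
kappa_F = np.linalg.norm(T) * np.linalg.norm(np.linalg.pinv(T))

# For singular values 3 and 1: sqrt(9 + 1) * sqrt(1/9 + 1) = 10/3.
```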
A number of theoretical and computational problems for matrix polynomials are solved by passing to linearizations. A perturbation theory that relates perturbations in the linearization to equivalent perturbations in the corresponding matrix polynomial is therefore needed. In this paper we develop an algorithm that finds which perturbation of the matrix coefficients of a matrix polynomial corresponds to a given perturbation of the entire linearization pencil. Moreover, we find transformation matrices that, via strict equivalence, transform a perturbation of the linearization into the linearization of a perturbed polynomial. For simplicity, we present the results for the first companion linearization, but they can be generalized to a broader class of linearizations.
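For concreteness, here is a minimal sketch of the first companion linearization mentioned above, for a quadratic matrix polynomial P(λ) = λ²A₂ + λA₁ + A₀. The block layout follows the standard first companion form C₁(λ) = λX + Y; the eigenvalue check at the end is only a sanity test we add for the demo, not part of the paper's algorithm.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n = 3
A2, A1, A0 = (rng.standard_normal((n, n)) for _ in range(3))

# First companion linearization C1(lambda) = lambda*X + Y of
# P(lambda) = lambda^2 A2 + lambda A1 + A0.
I = np.eye(n)
Z = np.zeros((n, n))
X = np.block([[A2, Z], [Z, I]])
Y = np.block([[A1, A0], [-I, Z]])

# Pencil eigenvalues: (lambda*X + Y) v = 0  <=>  -Y v = lambda X v.
lams = eig(-Y, X, right=False)

# Sanity check: every finite pencil eigenvalue makes P(lambda) singular,
# so the smallest singular value of P(lambda) should be ~0.
residuals = [np.linalg.svd(lam**2 * A2 + lam * A1 + A0,
                           compute_uv=False)[-1]
             for lam in lams if np.isfinite(lam)]
```

With A₂ invertible (as for a random matrix here), all 2n pencil eigenvalues are finite and coincide with the eigenvalues of P.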
“…The second approach: when A is a general large matrix, it is unaffordable to apply (AᵀA)⁻¹. Avron, Druinsky and Toledo [1] propose a randomized Krylov-subspace method to estimate the condition number of a matrix A. In their method, a consistent linear least-squares problem whose solution is generated randomly is solved iteratively by the LSQR algorithm [4], and the smallest singular value of A is estimated by σ_min(A) ≈ ‖Ae‖/‖e‖, where e is the error between the approximate solution and the exact one.…”
Section: Accuracy of the Generalized Singular Vectors
“…In their method, a consistent linear least-squares problem whose solution is generated randomly is solved iteratively by the LSQR algorithm [4], and the smallest singular value of A is estimated by σ_min(A) ≈ ‖Ae‖/‖e‖, where e is the error between the approximate solution and the exact one. We refer the reader to [1] for details.…”
Section: Accuracy of the Generalized Singular Vectors
“…For the matrix pair (A, B) of each problem, we recover all the computed GSVD components (α̃, β̃, ũ, ṽ, x̃) and (α̂, β̂, û, v̂, x̂) from the computed eigenpairs of the augmented matrix pairs (A, B) and (B, A), respectively, i.e., (σ̃, ỹ) and (1/σ̂, ẑ), which are obtained by applying the Matlab built-in function eig to (1.4) and (1.5), respectively. The "exact" GSVD components (α, β, u, v, x) are computed by applying the Matlab built-in function gsvd to (A, B). We then measure the accuracy of the computed generalized singular values by their chordal distances from their exact counterparts, and the accuracy of the computed generalized singular vectors by the sines of the angles between them and their exact counterparts.…”
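The two accuracy measures used above can be written down directly. A minimal sketch: the chordal distance between generalized singular values represented as pairs (α, β), and the sine of the angle between two vectors (the function names are ours).

```python
import numpy as np

def chordal_distance(a1, b1, a2, b2):
    """Chordal distance between generalized singular values
    represented as pairs (alpha, beta)."""
    return abs(a1 * b2 - a2 * b1) / (np.hypot(a1, b1) * np.hypot(a2, b2))

def sin_angle(u, v):
    """Sine of the acute angle between vectors u and v."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    c = min(1.0, abs(np.dot(u, v)))
    return np.sqrt(1.0 - c * c)
```

The chordal distance treats (α, β) and (cα, cβ) as the same generalized singular value, which is why proportional pairs have distance zero.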
For the computation of the generalized singular value decomposition (GSVD) of a matrix pair (A, B) of full column rank, the GSVD is commonly formulated as two mathematically equivalent generalized eigenvalue problems, so that a generalized eigensolver can be applied to one of them and the desired GSVD components are then recovered from the computed generalized eigenpairs. Our concern in this paper is which formulation of the generalized eigenvalue problem is preferable for computing the desired GSVD components more accurately. A detailed perturbation analysis is made of the two formulations, showing how to make a suitable choice between them. Numerical experiments illustrate the obtained results.
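As an illustration of one such eigenvalue formulation (hedged: this uses the (AᵀA, BᵀB) pencil, which is one standard route, not necessarily the paper's preferred formulation), the generalized singular values σ of (A, B) satisfy AᵀAx = σ²BᵀBx; for square invertible B they also equal the ordinary singular values of AB⁻¹, which the sketch uses as a cross-check.

```python
import numpy as np
from scipy.linalg import eig, svd

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 5 * np.eye(n)  # keep B well-conditioned

# Pencil formulation: A^T A x = sigma^2 B^T B x, so the generalized
# eigenvalues of (A^T A, B^T B) are the squared generalized singular values.
w = eig(A.T @ A, B.T @ B, right=False).real
sigmas_pencil = np.sort(np.sqrt(np.clip(w, 0.0, None)))

# Cross-check: for invertible B, the generalized singular values of
# (A, B) equal the ordinary singular values of A B^{-1}.
sigmas_direct = np.sort(svd(A @ np.linalg.inv(B), compute_uv=False))
```

Note that forming AᵀA and BᵀB squares the conditioning of the problem, which is exactly the kind of accuracy question the perturbation analysis above addresses.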
Keywords: Generalized singular value decomposition • generalized singular value • generalized singular vector • generalized eigenpair • eigensolver • perturbation analysis • condition number
Mathematics Subject Classification (2010): 65F15 • 65F35 • 15A12 • 15A18 • 15A42
“…This lack of robustness, even in the case of positive definite A, results in part from the fact that the research community does not have robust methods for estimating condition numbers of large sparse matrices, which makes proxies for preconditioner quality necessary. For instance, Avron et al [4] recently produced a condition number estimator in this setting which appears to perform admirably in many situations but does not always converge and at this point does not have rigorous theoretical backing.…”
The task of choosing a preconditioner M to use when solving a linear system Ax = b with iterative methods is difficult. For instance, even if one has access to a collection M_1, M_2, …, M_n of candidate preconditioners, it is currently unclear how to practically choose the M_i that minimizes the number of iterations of an iterative algorithm needed to achieve a suitable approximation to x. This paper makes progress on this sub-problem by showing that the preconditioner stability ‖I − M⁻¹A‖_F, known to forecast preconditioner quality, can be computed in the time it takes to run a constant number of iterations of conjugate gradients, through the use of sketching methods. This is in spite of folklore which suggests the quantity is impractical to compute, and a proof we give that ensures the quantity could not possibly be approximated in a useful amount of time by a deterministic algorithm. Using our estimator, we provide a method which can provably select the minimal-stability preconditioner among n candidates using floating-point operations commensurate with running on the order of n log n steps of the conjugate gradients algorithm. Our method can also advise the practitioner to use no preconditioner at all if none of the candidates appears useful. The algorithm is extremely easy to implement and trivially parallelizable. In one of our experiments, we use our preconditioner selection algorithm to create, to the best of our knowledge, the first preconditioned method for kernel regression reported never to use more iterations than the non-preconditioned analog in standard tests.
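The stability quantity ‖I − M⁻¹A‖_F can be estimated with a standard Gaussian sketch, since E‖(I − M⁻¹A)g‖² = ‖I − M⁻¹A‖_F² for g ~ N(0, I). This is a generic Hutchinson-style sketch we supply for illustration, not the paper's specific estimator or its analysis; the Jacobi preconditioner in the demo is an arbitrary choice.

```python
import numpy as np

def stability_sketch(A, M_solve, k, rng):
    """Estimate ||I - M^{-1} A||_F with k Gaussian probes.
    M_solve(y) should apply M^{-1} to y (e.g. a diagonal or
    triangular solve)."""
    n = A.shape[0]
    total = 0.0
    for _ in range(k):
        g = rng.standard_normal(n)
        r = g - M_solve(A @ g)   # (I - M^{-1} A) g
        total += r @ r           # E[||r||^2] = ||I - M^{-1} A||_F^2
    return np.sqrt(total / k)

# Demo: Jacobi preconditioner M = diag(A) on a diagonally dominant A.
rng = np.random.default_rng(3)
n = 30
A = rng.standard_normal((n, n)) + 10 * np.eye(n)
d = np.diag(A)
est = stability_sketch(A, lambda y: y / d, k=4000, rng=rng)
exact = np.linalg.norm(np.eye(n) - A / d[:, None], ord="fro")
```

Each probe costs one matrix-vector product and one preconditioner solve, which is how the cost stays comparable to a few iterations of conjugate gradients.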