2009
DOI: 10.1007/s00211-009-0240-8
Implicit standard Jacobi gives high relative accuracy

Abstract: We prove that the Jacobi algorithm applied implicitly on a decomposition A = X D Xᵀ of the symmetric matrix A, where D is diagonal and X is well conditioned, computes all eigenvalues of A to high relative accuracy. The relative error in every eigenvalue is bounded by O(ε κ(X)), where ε is the machine precision and κ(X) ≡ ‖X‖₂ · ‖X⁻¹‖₂ is the spectral condition number of X. The eigenvectors are also computed accurately in the appropriate sense. We believe that this is the first algorithm to compute accurate ei…
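As context for the abstract, a minimal sketch of the classical (explicit) cyclic Jacobi eigenvalue iteration that the paper builds on; the paper's contribution is to apply the same rotations implicitly through the factor X of A = X D Xᵀ rather than to A itself. The function name and tolerances below are illustrative, not from the paper.

```python
import numpy as np

def jacobi_eigen(A, tol=1e-12, max_sweeps=50):
    """Classical cyclic Jacobi for a symmetric matrix A.

    Explicit variant, shown only to illustrate the rotation scheme;
    the paper applies these rotations implicitly via A = X D X^T.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)  # accumulates the rotations: final A = V.T @ A0 @ V
    for _ in range(max_sweeps):
        off = np.sqrt(np.sum(np.tril(A, -1) ** 2))
        if off < tol * np.linalg.norm(A):
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if A[p, q] == 0.0:
                    continue
                # Rotation angle chosen to zero out A[p, q]:
                # tan(2*theta) = 2*A[p,q] / (A[q,q] - A[p,p])
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q] = s
                J[q, p] = -s
                A = J.T @ A @ J
                V = V @ J
    return np.diag(A), V
```

Note that for an ill-conditioned A this explicit form only guarantees small *absolute* errors; the implicit form analyzed in the paper is what achieves relative accuracy governed by κ(X).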

Cited by 28 publications (40 citation statements)
References 44 publications (120 reference statements)
“…In this section, we will use an approach completely different from the one in [46] to show that a relative perturbation bound similar to (4.1) holds for the eigenvalues of symmetric indefinite diagonally dominant matrices. This approach is inspired by the algorithm presented in [15] for computing, with high relative accuracy, the eigenvalues of any symmetric matrix expressed as a rank-revealing decomposition, and by its error analysis. The key point is that in the case of symmetric diagonally dominant matrices the strong perturbation bounds given in Theorem 2.4 for the LDLᵀ factorization can be expressed as a small multiplicative perturbation of the original matrix, which allows us to apply the eigenvalue relative perturbation results developed in [17].…”
Section: Bounds for Eigenvalues of Symmetric Matrices
confidence: 99%
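The multiplicative-perturbation argument in the excerpt above rests on a standard relative perturbation result of Ostrowski/Eisenstat–Ipsen type; a sketch of the statement being invoked (the symbols E and ε here are generic, not the paper's notation):

```latex
% A multiplicative perturbation of a symmetric matrix A,
\tilde{A} = (I + E)^{T} A \, (I + E), \qquad \|E\|_{2} = \epsilon < 1,
% moves each eigenvalue (ordered consistently) by a small RELATIVE amount,
\lvert \tilde{\lambda}_i - \lambda_i \rvert \le (2\epsilon + \epsilon^{2})\,\lvert \lambda_i \rvert,
% independently of kappa(A): this follows from Ostrowski's theorem, since the
% eigenvalues of (I+E)^T (I+E) lie in [(1-\epsilon)^2, (1+\epsilon)^2].
```

This is why recasting a factorization-level perturbation as a multiplicative one, as the excerpt describes, immediately yields relative (rather than absolute) eigenvalue bounds.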
“…Furthermore, this algorithm can be combined with the algorithms presented in [11] to compute the singular values with relative errors on the order of machine precision. In fact, the algorithm for the LDU factorization in [45] can also be combined with the algorithms in [7,15,16] to compute, with high relative accuracy, solutions to linear systems and solutions to least squares problems involving diagonally dominant matrices, and eigenvalues of symmetric diagonally dominant matrices.…”
confidence: 99%
“…Throughout this proof L, D and U denote the exact (1)). Note first that, according to Lemma 8, one can consider that Ã is obtained from A by applying a sequence of n + 1 perturbations that can be: (1) of type (15) with δ = u (recall (19) and (20)); or (2) perturbations that are reversals of type (15) …”
Section: Lemma 9 Use the Same Notation and Assumptions as in Lemma 8
confidence: 99%
“…The fundamental point in (3) is that the error bound is governed by the condition numbers of the well conditioned factors X and Y, and not by κ(A), which may be extremely large. Symmetric RRDs have also been used to compute accurate eigenvalues and eigenvectors of symmetric matrices [21,18,19]. Very recently [20], it has been shown that computing an accurate RRD, in the sense of (1)–(2), of an n × n matrix A also leads to very important benefits in the accuracy of the numerical solution of the system Ax = b.…”
Section: Introduction
confidence: 99%
“…Here, U is a unitary matrix of order m, while V is J-unitary (i.e., V* J V = J) of order n. The HSVD in (2.10) can be computed by orthogonalization of either the columns of G* by trigonometric rotations [12], or the columns of G by hyperbolic rotations [41].…”
confidence: 99%
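As a minimal illustration of the J-unitarity property V* J V = J quoted in the excerpt above, here is a 2×2 real hyperbolic rotation and a numerical check of its defining identity. This is a generic textbook construction, not code from either cited reference:

```python
import numpy as np

def hyperbolic_rotation(t):
    """2x2 hyperbolic rotation H(t); J-unitary for J = diag(1, -1),
    since cosh(t)^2 - sinh(t)^2 = 1."""
    ch, sh = np.cosh(t), np.sinh(t)
    return np.array([[ch, sh], [sh, ch]])

J = np.diag([1.0, -1.0])
H = hyperbolic_rotation(0.7)

# The defining property of a J-unitary matrix, H^T J H = J
# (the real analogue of V* J V = J in the excerpt):
assert np.allclose(H.T @ J @ H, J)
```

Unlike trigonometric rotations, H is not orthogonal (its 2-norm grows like e^|t|), which is why hyperbolic orthogonalization requires the separate stability analyses the excerpt alludes to.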