1986
DOI: 10.2307/2007992

Newton's Method for the Matrix Square Root

Abstract: One approach to computing a square root of a matrix A is to apply Newton's method to the quadratic matrix equation F(X) = X² − A = 0. Two widely quoted matrix square root iterations obtained by rewriting this Newton iteration are shown to have excellent mathematical convergence properties. However, by means of a perturbation analysis and supportive numerical examples, it is shown that these simplified iterations are numerically unstable. A further variant of Newton's method for the matrix square root, recently pr…
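To make the abstract concrete, here is a minimal Python sketch (not the paper's code) of one commonly quoted simplified form of the Newton iteration, X_{k+1} = (X_k + X_k^{-1} A)/2 with X_0 = A; the paper's point is that iterations of this rewritten form converge well in exact arithmetic but can be numerically unstable in finite precision. The test matrix below is an illustrative assumption.

import numpy as np

def newton_sqrt_simplified(A, iters=20):
    # Simplified Newton iteration X_{k+1} = 0.5 * (X_k + X_k^{-1} A), X_0 = A.
    # Converges to the principal square root for suitable A in exact arithmetic,
    # but is numerically unstable for ill-conditioned problems.
    X = A.copy()
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.solve(X, A))   # X^{-1} A via a linear solve
    return X

A = np.array([[4.0, 1.0], [0.0, 9.0]])
X = newton_sqrt_simplified(A)
print(np.allclose(X @ X, A))   # True for this well-conditioned example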

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
2
1

Citation Types

2
83
0

Year Published

1987
1987
2023
2023

Publication Types

Select...
6
1
1

Relationship

0
8

Authors

Journals

citations
Cited by 82 publications
(88 citation statements)
references
References 16 publications
2
83
0
Order By: Relevance
“…The problem of computing the principal square root of a matrix is associated with a well-known problem of estimation theory, namely that of solving a continuous-time Riccati equation. The Riccati equation arises in linear estimation [3], namely in the implementation of the Kalman-Bucy filter, and it is formulated as…”
Section: Algorithm 5(a) (this algorithm is proposed in [5])
Mentioning, confidence: 99%
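As a hedged illustration of the connection described in this excerpt (the cited formulation itself is elided above), one standard special case: for a symmetric positive definite matrix Q, the continuous-time algebraic Riccati equation with zero drift and identity input and weight matrices reduces to X² = Q, so a generic CARE solver returns the principal square root. The matrices and solver call below are illustrative assumptions, not taken from the cited work.

import numpy as np
from scipy.linalg import solve_continuous_are

# CARE: A0^T X + X A0 - X B R^{-1} B^T X + Q = 0.
# With A0 = 0, B = I, R = I it reduces to X^2 = Q, whose stabilizing
# solution is the principal (SPD) square root of Q.
Q = np.array([[5.0, 2.0], [2.0, 3.0]])       # assumed SPD example
n = Q.shape[0]
X = solve_continuous_are(np.zeros((n, n)), np.eye(n), Q, np.eye(n))
print(np.allclose(X @ X, Q))                 # X is the principal square root of Q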
“…Due to the importance of the problem, many iterative algorithms have been proposed and successfully employed for calculating the principal square root of a matrix [1][2][3][4][5][6][7] without seeking the eigenvalues and eigenvectors of the matrix; these algorithms require a matrix inversion at every iteration. Blocked Schur algorithms for computing the matrix square root are proposed in [8], where the matrix is reduced to upper triangular form and a recurrence relation enables the square root of the triangular matrix to be computed a column or superdiagonal at a time.…”
Section: Introduction
Mentioning, confidence: 99%
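For context, the following Python sketch shows the unblocked, point form of the column-by-column recurrence this excerpt refers to: for an upper triangular T with square root U, u_jj = sqrt(t_jj) and u_ij = (t_ij − Σ_{k=i+1}^{j-1} u_ik u_kj)/(u_ii + u_jj). It assumes real, non-negative diagonal entries with u_ii + u_jj ≠ 0; the blocked variants of [8] are not reproduced here.

import numpy as np

def sqrt_upper_triangular(T):
    # Square root of an upper triangular matrix, computed one column at a time.
    n = T.shape[0]
    U = np.zeros_like(T)
    for j in range(n):
        U[j, j] = np.sqrt(T[j, j])
        for i in range(j - 1, -1, -1):
            s = U[i, i + 1:j] @ U[i + 1:j, j]            # sum_{k=i+1}^{j-1} u_ik u_kj
            U[i, j] = (T[i, j] - s) / (U[i, i] + U[j, j])
    return U

T = np.array([[4.0, 1.0, 2.0], [0.0, 9.0, 3.0], [0.0, 0.0, 16.0]])
U = sqrt_upper_triangular(T)
print(np.allclose(U @ U, T))   # True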
“…Next, we seek local optimization methods that approximate (18b) using a first-order Taylor series, so that problem (18) can be solved by a sequence of convex problems. Applying Newton's method [19] to the quadratic matrix equation…”
Section: Design of Time and Color Multiplexing Codes
Mentioning, confidence: 99%
“…where ∆F denotes a small change in F, and F_k denotes the estimate of F at iteration k. The details of how we derive (19) are provided in the SM. Since the newly updated F_{k+1} should be feasible for (18), F_k + ∆F should also fulfill (18a).…”
Section: Design of Time and Color Multiplexing Codes
Mentioning, confidence: 99%
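The cited paper's problem (18) and update (19) are not reproduced above, so as a generic stand-in, here is a sketch of the same first-order linearization idea applied to the quadratic matrix equation X² − A = 0: expanding about X_k gives the Sylvester equation X_k ∆X + ∆X X_k = A − X_k², and the full Newton update is X_{k+1} = X_k + ∆X. The starting guess and test matrix below are assumptions for illustration.

import numpy as np
from scipy.linalg import solve_sylvester

def newton_sqrt_full(A, X0, iters=15):
    # Full Newton step: solve X_k dX + dX X_k = A - X_k^2 for dX, then update.
    X = X0.copy()
    for _ in range(iters):
        dX = solve_sylvester(X, X, A - X @ X)
        X = X + dX
    return X

A = np.array([[10.0, 4.0], [4.0, 6.0]])
X = newton_sqrt_full(A, np.trace(A) * np.eye(2))   # crude commuting SPD starting guess
print(np.allclose(X @ X, A))   # True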
“…The orthogonal case has been studied by Schönemann [13] and Higham [8], and the symmetric case by Higham [10]. These problems are known in the literature as the orthogonal and symmetric Procrustes problems, respectively.…”
Section: Introduction
Mentioning, confidence: 99%
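For readers unfamiliar with the orthogonal Procrustes problem mentioned here, a short sketch of the classical SVD solution (due to Schönemann): to minimize ||AQ − B||_F over orthogonal Q, take the SVD A^T B = U S V^T and set Q = U V^T. The random test data below is an illustrative assumption.

import numpy as np

def orthogonal_procrustes_svd(A, B):
    # Classical SVD solution: Q = U V^T where A^T B = U S V^T.
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
Q_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
B = A @ Q_true
Q = orthogonal_procrustes_svd(A, B)
print(np.allclose(Q.T @ Q, np.eye(3)), np.allclose(A @ Q, B))   # True True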