2015
DOI: 10.1007/s40819-015-0083-1

A Class of Kung–Traub-Type Iterative Algorithms for Matrix Inversion

Abstract: Kung and Traub (J ACM 21:643–651, 1974) constructed two optimal general iterative methods without memory for finding solutions of nonlinear equations. In this work, we show that one of them can be applied to matrix inversion. It is observed that convergence order 2^m can be attained using 2m matrix–matrix multiplications. Moreover, a method with the efficiency index 10^{1/6} ≈ 1.4678 will be furnished. To justify that our procedure works efficiently, some numerical problems are included.
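The schemes the abstract describes are higher-order matrix iterations in the spirit of the classical Schulz method, driven by the residual I − AX. The sketch below is a minimal illustration of how such a residual-driven iteration is organized, using only the classical second-order Schulz step X_{k+1} = X_k(2I − AX_k) rather than the paper's higher-order Kung–Traub-type construction; the function name, tolerance, and test matrix are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def schulz_inverse(A, tol=1e-12, max_iter=100):
    """Classical second-order Schulz (Newton) iteration X <- X(2I - AX).

    Illustrative baseline only; the paper's Kung-Traub-type schemes are
    higher-order relatives of this fixed-point iteration.
    """
    n = A.shape[0]
    I = np.eye(n)
    # Ben-Israel-type safe starting matrix: X0 = A^T / (||A||_1 * ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        R = I - A @ X                      # current residual
        if np.linalg.norm(R, "fro") < tol:
            break
        X = X @ (I + R)                    # equals X(2I - AX)
    return X

# Small smoke test on a well-conditioned matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)
X = schulz_inverse(A)
print(np.linalg.norm(np.eye(5) - A @ X))   # expect roughly machine precision
```

For this baseline the residual norm roughly squares at each step; the higher-order variants in the paper aim to raise the order gained per matrix–matrix multiplication, which is what the efficiency index 10^{1/6} quantifies.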

Cited by 7 publications (5 citation statements)
References: 17 publications
“…which shows a convergence rate of four for Equation (14). Note that the norm of sign(A) can be arbitrarily large, even though its eigenvalues are ±1.…”
Section: Error Estimate
confidence: 83%
“…Note that the iterative solver in Equation (10) does not satisfy the Kung–Traub conjecture regarding the construction of iterative schemes without memory for solving equations [14], but it has an important feature. In fact, if we pursue optimality of the iterative solvers for nonlinear equations, then we lose the global convergence behavior in solving Equation (5), which clearly limits the matrix application of such solvers.…”
Section: An Improved Mid-Point Method
confidence: 99%
“…Schulz-type methods for the calculation of the WMP inverse are sensitive to the choice of the initial value; that is, the initial matrix must be chosen close enough to the generalized inverse to guarantee that the scheme converges [20]. More precisely, convergence can only be observed if the starting matrix is chosen carefully.…”
Section: Literature
confidence: 99%
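As a deliberately simplified illustration of that sensitivity for the ordinary inverse (the cited work concerns the weighted Moore–Penrose case): the second-order Schulz iteration X_{k+1} = X_k(2I − AX_k) converges exactly when the initial residual satisfies ρ(I − AX_0) < 1. One commonly used safe choice, assumed here for illustration rather than taken from the cited work, is X_0 = A^T / (‖A‖_1 · ‖A‖_∞), since the eigenvalues of I − AX_0 are then 1 − σ_i^2 / (‖A‖_1 ‖A‖_∞), which lie in [0, 1) because σ_max^2 = ‖A‖_2^2 ≤ ‖A‖_1 ‖A‖_∞ for any nonsingular A.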
“…Methods for calculating generalized inverses are a topic of current investigation in computational mathematics (see, e.g., [1,2]). An enormous amount of work on generalized inverses has been done during the past 63 years.…”
Section: Introductory Notes
confidence: 99%