2006
DOI: 10.1137/050628283
The Conditioning of Linearizations of Matrix Polynomials

Abstract: The standard way of solving the polynomial eigenvalue problem of degree m in n × n matrices is to "linearize" to a pencil in mn × mn matrices and solve the generalized eigenvalue problem. For a given polynomial P, infinitely many linearizations exist, and they can have widely varying eigenvalue condition numbers. We investigate the conditioning of linearizations from a vector space DL(P) of pencils recently identified and studied by Mackey, Mackey, Mehl, and Mehrmann. We look for the best condition…
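As a concrete illustration of the linearization the abstract describes, the sketch below (not code from the paper; the matrices are arbitrary examples) builds the first companion pencil of a quadratic matrix polynomial P(λ) = λ²A₂ + λA₁ + A₀ and solves the resulting 2n × 2n generalized eigenvalue problem:

```python
# Sketch: linearize a quadratic matrix polynomial P(lam) = lam^2*A2 + lam*A1 + A0
# into a 2n x 2n companion pencil and solve the generalized eigenvalue problem.
# A0, A1, A2 are illustrative random matrices, not data from the paper.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 3
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

I = np.eye(n)
Z = np.zeros((n, n))
# First companion form: L(lam) = lam*X + Y with
#   X = [[A2, 0], [0, I]],   Y = [[A1, A0], [-I, 0]].
# For z = [lam*v; v] one checks L(lam) z = [P(lam) v; 0].
X = np.block([[A2, Z], [Z, I]])
Y = np.block([[A1, A0], [-I, Z]])

# L(lam) z = 0  <=>  (-Y) z = lam * X z
eigvals = eig(-Y, X, right=False)

# Each finite eigenvalue lam should make P(lam) numerically singular.
for lam in eigvals:
    P = lam**2 * A2 + lam * A1 + A0
    assert np.min(np.linalg.svd(P, compute_uv=False)) < 1e-8 * np.linalg.norm(P)
```

The check at the end verifies the defining property of a linearization: every eigenvalue of the pencil is an eigenvalue of P.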

Cited by 105 publications (170 citation statements)
References 9 publications (21 reference statements)
“…Unless the block structure of the linearization is respected (and it is not by standard algorithms), the conditioning of the larger linear problem can be worse than that of the original matrix polynomial, since the class of admissible perturbations is larger. For example, eigenvalues that are well-conditioned for P(λ) may be ill-conditioned for L(λ) [39, 41, 78]. Ideally, when solving (20) via (21) we would like to have κ_P(λ) ≈ κ_L(λ).…”
Section: Impact On Numerical Practice (mentioning)
confidence: 99%
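The κ_P(λ) ≈ κ_L(λ) comparison in this excerpt can be made concrete. The sketch below (an assumed setup, not the paper's experiment) uses the standard Tisseur-style normwise eigenvalue condition number for P and for the companion pencil L, with deliberately badly scaled coefficients, where the two can differ greatly:

```python
# Sketch (assumed formulas): normwise eigenvalue condition numbers for a
# quadratic P(lam) = lam^2*A2 + lam*A1 + A0 and for its first companion
# pencil L(lam) = lam*X + Y.  The badly scaled coefficients below are an
# illustrative choice, not taken from the paper.
import numpy as np
from scipy.linalg import eig, svd

rng = np.random.default_rng(1)
n = 2
A2 = 1e-6 * rng.standard_normal((n, n))   # tiny leading coefficient
A1 = rng.standard_normal((n, n))
A0 = 1e6 * rng.standard_normal((n, n))    # large constant coefficient

I, Z = np.eye(n), np.zeros((n, n))
X = np.block([[A2, Z], [Z, I]])
Y = np.block([[A1, A0], [-I, Z]])

# Right (V) and left (W) eigenvectors of the pencil (-Y, X).
lams, W, V = eig(-Y, X, left=True, right=True)

def kappa_P(lam, x, y):
    # kappa_P = (sum_i |lam|^i ||A_i||) ||y|| ||x|| / (|lam| |y^H P'(lam) x|)
    weight = sum(abs(lam) ** i * np.linalg.norm(A) for i, A in enumerate((A0, A1, A2)))
    dP = 2 * lam * A2 + A1
    return weight * np.linalg.norm(y) * np.linalg.norm(x) / (abs(lam) * abs(y.conj() @ dP @ x))

def kappa_L(lam, z, w):
    # Same formula for the degree-1 polynomial L(lam) = lam*X + Y, so L' = X.
    weight = abs(lam) * np.linalg.norm(X) + np.linalg.norm(Y)
    return weight * np.linalg.norm(w) * np.linalg.norm(z) / (abs(lam) * abs(w.conj() @ X @ z))

k = np.argmax(np.abs(lams))        # examine the largest-modulus eigenvalue
lam, z, w = lams[k], V[:, k], W[:, k]
x = z[n:]                          # z = [lam*v; v], so v = z[n:] is an eigenvector of P
U, s, Vh = svd(lam**2 * A2 + lam * A1 + A0)
y = U[:, -1]                       # left null vector of P(lam)
print(kappa_P(lam, x, y), kappa_L(lam, z, w))
```

For well-scaled problems the two numbers are comparable; poor scaling is exactly the situation in which the pencil's condition number can dwarf that of P, motivating the search for well-conditioned linearizations in DL(P).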
“…This is done for example in [39] for pencils L ∈ DL(P), where minimization of the ratio over L is considered. Backward errors characterize the stability of a numerical method for solving a problem by measuring how far the problem has to be perturbed for an approximate solution to be an exact solution of the perturbed problem.…”
Section: Impact On Numerical Practice (mentioning)
confidence: 99%
“…This will be a consequence of the following result. (12), with U and u partitioned as in (14), the relation…”
Section: Lemma (mentioning)
confidence: 99%
“…This yields an equivalent dn × dn linear eigenvalue problem L_0 − λL_1, to which standard linear eigenvalue solvers like the QZ algorithm or the Arnoldi method [11] can be applied. Many different linearizations are possible, and much progress has been made in the last few years in developing a unified framework [21], which allows, e.g., for structure preservation [20] and a unified sensitivity analysis [14, 1].…”
Section: Introduction (mentioning)
confidence: 99%
“…When working with matrix polynomials, it is standard to use a strong linearization to convert the polynomial eigenvalue problem to a generalized eigenvalue problem [3, 14, 15]. A strong linearization, as opposed to weaker linearizations, preserves eigenstructure at infinity as well as the finite eigenstructure.…”
Section: Reversing Polynomials Expressed In A Lagrange Basis (mentioning)
confidence: 99%
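The eigenstructure at infinity mentioned here is usually studied via the reversal rev P(λ) = λ^m P(1/λ): eigenvalues of P at infinity correspond to zero eigenvalues of rev P. A small sketch (illustrative matrices, not from the cited works) with a singular leading coefficient, so P genuinely has an eigenvalue at infinity:

```python
# Sketch: a quadratic P(lam) = lam^2*A2 + lam*A1 + A0 with singular A2 has an
# eigenvalue at infinity; its reversal rev P(lam) = lam^2*A0 + lam*A1 + A2
# has a corresponding zero eigenvalue.  Illustrative matrices only.
import numpy as np
from scipy.linalg import eig

A0 = np.array([[2.0, 0.0], [0.0, 3.0]])
A1 = np.eye(2)
A2 = np.array([[1.0, 0.0], [0.0, 0.0]])   # singular => eigenvalue at infinity

I, Z = np.eye(2), np.zeros((2, 2))
# First companion pencil of P (a strong linearization):
X = np.block([[A2, Z], [Z, I]])
Y = np.block([[A1, A0], [-I, Z]])
eigs_P = eig(-Y, X, right=False)

# Companion pencil of rev P (coefficient order swapped):
Xr = np.block([[A0, Z], [Z, I]])
Yr = np.block([[A1, A2], [-I, Z]])
eigs_rev = eig(-Yr, Xr, right=False)

# P has an (essentially) infinite eigenvalue; rev P has a zero eigenvalue.
assert np.any(np.isinf(eigs_P) | (np.abs(eigs_P) > 1e10))
assert np.any(np.isclose(eigs_rev, 0.0))
```

A strong linearization preserves both structures at once, which is why it is the standard tool for converting polynomial eigenproblems, including ones expressed in non-monomial bases such as the Lagrange basis.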