We address the problem of the exact computation of two joint spectral characteristics of a family of linear operators: the joint spectral radius (in short JSR) and the lower spectral radius (in short LSR), which are well-known, distinct generalizations to a set of operators of the usual spectral radius of a linear operator. In this article we develop a method which, under suitable assumptions, allows us to compute the JSR and the LSR of a finite family of matrices exactly. We remark that so far no algorithm was available in the literature to compute the LSR exactly. The paper presents the necessary theoretical results on extremal norms (and on extremal antinorms) of linear operators, which constitute the basic tools of our procedures, and a detailed description of the corresponding algorithms for the computation of the JSR and the LSR (the latter restricted to families sharing an invariant cone). The algorithms are easily implemented and their descriptions are short. If the algorithms terminate in finite time, then they construct an extremal norm (in the JSR case) or antinorm (in the LSR case) and find the exact values; otherwise they provide upper and lower bounds that both converge to the exact values. A theoretical criterion for termination in finite time is also derived. According to numerical experiments, the algorithm for the JSR finds the exact value for the vast majority of matrix families in dimensions ≤ 20. For nonnegative matrices it works faster and finds the JSR in dimensions of order 100 within a few iterations; the same is observed for the algorithm computing the LSR.
Keywords: Linear operator · joint spectral radius · lower spectral radius · algorithm · polytope · extremal norm · antinorm
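As background for the bracketing that such exact algorithms refine: products of length k of the family give computable two-sided bounds, since ρ(P)^(1/k) is a lower bound on the JSR and ‖P‖^(1/k) an upper bound for any submultiplicative norm. A minimal pure-Python sketch for 2×2 families (brute-force enumeration of products, not the polytope algorithm of the paper; all names are illustrative):

```python
import cmath
from itertools import product

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def spectral_radius(M):
    # eigenvalues of a 2x2 matrix from its characteristic polynomial
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

def inf_norm(M):
    # induced infinity norm: maximal absolute row sum (submultiplicative)
    return max(sum(abs(x) for x in row) for row in M)

def jsr_bounds(mats, depth):
    """Lower/upper bounds on the JSR from all products up to length `depth`."""
    lo, hi = 0.0, float('inf')
    for k in range(1, depth + 1):
        prods = []
        for combo in product(mats, repeat=k):
            P = combo[0]
            for M in combo[1:]:
                P = matmul(P, M)
            prods.append(P)
        lo = max(lo, max(spectral_radius(P) for P in prods) ** (1.0 / k))
        hi = min(hi, max(inf_norm(P) for P in prods) ** (1.0 / k))
    return lo, hi
```

For the pair of binary Pascal matrices [[1,1],[0,1]] and [[1,0],[1,1]], whose JSR is the golden ratio, this sketch already brackets the value between φ ≈ 1.618 and √3 ≈ 1.732 at depth 2; the point of extremal polytope norms is to close such gaps exactly.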
The ε-pseudospectral abscissa and radius of an n × n matrix are, respectively, the maximal real part and the maximal modulus of points in its ε-pseudospectrum, defined using the spectral norm. Existing techniques compute these quantities accurately, but the cost is multiple singular value decompositions and eigenvalue decompositions of order n, making them impractical when n is large. We present new algorithms based on computing only the spectral abscissa or radius of a sequence of matrices, generating a sequence of lower bounds for the pseudospectral abscissa or radius. We characterize fixed points of the iterations, and we discuss conditions under which the sequence of lower bounds converges to local maximizers of the real part or modulus over the pseudospectrum, proving a locally linear rate of convergence for ε sufficiently small. The convergence results depend on a perturbation theorem for the normalized eigenprojection of a matrix as well as a characterization of the group inverse (reduced resolvent) of a singular matrix defined by a rank-one perturbation. The total cost of the algorithms is typically only a constant times the cost of computing the spectral abscissa or radius, where the value of this constant usually increases with ε, and may be less than 10 in many practical cases of interest.
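The flavor of such an iteration can be sketched as follows (a hedged illustration under simplifying assumptions, not the authors' algorithm: function and variable names are invented, and degenerate cases such as defective eigenvalues or y*x = 0 are ignored). Each step perturbs A by a rank-one matrix of spectral norm ε built from the left and right eigenvectors of the current rightmost eigenvalue, so every computed value is a valid lower bound for the ε-pseudospectral abscissa:

```python
import numpy as np

def psa_lower_bounds(A, eps, iters=20):
    """Rank-one iteration sketch: each step needs only one rightmost
    eigenvalue of A + eps*E with ||E||_2 = 1, so every recorded value
    lies in the eps-pseudospectrum and bounds its abscissa from below."""
    E = np.zeros_like(A, dtype=complex)
    bounds = []
    for _ in range(iters):
        M = A + eps * E
        w, V = np.linalg.eig(M)
        k = int(np.argmax(w.real))            # rightmost eigenvalue of M
        bounds.append(float(w[k].real))
        x = V[:, k]                           # right eigenvector
        wl, U = np.linalg.eig(M.conj().T)
        j = int(np.argmin(np.abs(wl - np.conj(w[k]))))
        y = U[:, j]                           # matching left eigenvector
        s = np.vdot(y, x)                     # y^H x, assumed nonzero
        y = y * (s / abs(s))                  # rotate so that y^H x > 0
        E = np.outer(y, x.conj())             # first-order ascent direction
        E = E / np.linalg.norm(E, 2)          # keep ||E||_2 = 1
    return bounds
```

For a normal matrix the pseudospectral abscissa is exactly the spectral abscissa plus ε, and the sketch reaches it in one step; the nontrivial analysis in the paper concerns the non-normal case.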
We consider the real ε-pseudospectrum of a real square matrix, which is the set of eigenvalues of all real matrices that are ε-close to the given matrix, where closeness is measured in either the 2-norm or the Frobenius norm. We characterize extremal points and compare the situation with that for the complex ε-pseudospectrum. We present differential equations for rank-1 and rank-2 matrices for the computation of the real pseudospectral abscissa and radius. Discretizations of the differential equations yield algorithms that are fast and well suited to large sparse matrices. Based on these low-rank differential equations, we further obtain an algorithm for drawing boundary sections of the real pseudospectrum with respect to both the 2-norm and the Frobenius norm.
Ordinary differential equations with discontinuous right-hand side, where the discontinuity of the vector field arises on smooth surfaces of the phase space, are the topic of this work. The main emphasis is the study of solutions close to the intersection of two discontinuity surfaces. There, the so-called hidden dynamics describes the smooth transition from ingoing to outgoing solution directions, which occurs instantaneously in the jump discontinuity of the vector field. This article presents a complete classification of such transitions (assuming the vector fields surrounding the intersection are transversal to it). Since the hidden dynamics is realized by standard space regularizations, much insight into such regularizations is obtained. One can predict, in the case of multiple solutions of the discontinuous problem, which solution (classical or sliding mode) will be approximated after entering the intersection of two discontinuity surfaces. A novel modification of space regularizations is presented that makes it possible to avoid (unphysical) high oscillations and renders the numerical treatment more efficient.
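The idea of a space regularization can be sketched in one dimension (a hedged toy example, not the modification proposed in the article): the discontinuous sign function is smoothed over a layer of width ε, so that a sliding-mode solution of the original problem becomes an ordinary solution of the regularized one, staying in an O(ε) neighborhood of the discontinuity surface x = 0.

```python
import math

def sign_reg(s, eps):
    # smooth space regularization of sign(s) over a layer of width eps
    return math.tanh(s / eps)

def euler(f, x0, h, steps):
    # explicit Euler integration of x' = f(x)
    x = x0
    for _ in range(steps):
        x = x + h * f(x)
    return x

# sliding-mode example: x' = -sign(x) attracts all solutions to x = 0;
# the regularized field keeps them within an O(eps) neighborhood of it
f = lambda x: -sign_reg(x, 1e-3)
x_end = euler(f, 1.0, 1e-3, 2000)
```

Note that the step size must be compatible with the layer width (here h = ε), which is exactly the stiffness issue that makes careful regularization design and efficient numerical treatment nontrivial.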
Systems of implicit delay differential equations, including state-dependent problems, neutral and differential-algebraic equations, singularly perturbed problems, and problems with small or vanishing delays, are considered. The numerical integration of such problems is very sensitive to jump discontinuities in the solution or in its derivatives (so-called breaking points). In this article we discuss a new strategy, peculiar to implicit schemes, that allows the code to detect automatically and then compute very accurately those breaking points which have to be inserted into the mesh to guarantee the required accuracy. In particular for state-dependent delays, where breaking points are not known in advance, this treatment leads to a significant improvement in accuracy. As a theoretical result we obtain a general convergence theorem which was missing in the literature (see [BZ03]). Furthermore, as a useful by-product, we design strategies that are able to detect points of non-uniqueness or non-existence of the solution so that the code can terminate when such a situation occurs. A new version of the code RADAR5 together with drivers for some real-life problems is available on the homepages of the authors.
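At its core, locating a breaking point reduces to root-finding: a new breaking point ξ occurs where the deviated argument α(t) = t − τ(t) crosses a previously detected breaking point. A minimal sketch with a hypothetical, state-independent delay (plain bisection stands in for the implicit-scheme detection machinery of the paper; the delay function is invented for illustration):

```python
import math

def bisect(g, a, b, tol=1e-12):
    # locate a sign change of g on [a, b]: a breaking point xi solves
    # g(xi) = alpha(xi) - t_prev = 0 for a previous breaking point t_prev
    ga = g(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if ga * g(m) <= 0:
            b = m
        else:
            a, ga = m, g(m)
    return 0.5 * (a + b)

# hypothetical deviation alpha(t) = t - tau(t) with delay
# tau(t) = 1 + 0.5*sin(t); previous breaking point t_prev = 0
xi = bisect(lambda t: t - (1.0 + 0.5 * math.sin(t)), 0.0, 3.0)
```

For state-dependent delays τ(t, y(t)) the equation additionally involves the computed solution, which is why detecting and inserting such points automatically, as the new strategy does, matters for accuracy.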