We present an alternative strategy for truncating the higher-order singular value decomposition (T-HOSVD). An error expression for an approximate Tucker decomposition with orthogonal factor matrices is presented, leading us to propose a novel truncation strategy for the HOSVD, which we refer to as the sequentially truncated higher-order singular value decomposition (ST-HOSVD). This decomposition retains several favorable properties of the T-HOSVD, while reducing the number of operations required to compute the decomposition and practically always improving the approximation error. Three applications demonstrate the effectiveness of ST-HOSVD. In the first, ST-HOSVD, T-HOSVD and higher-order orthogonal iteration (HOOI) are employed to compress a database of images of faces. On average, the ST-HOSVD approximation was only 0.1% worse than the optimum computed by HOOI, while cutting the execution time by a factor of 20. In the second application, classification of handwritten digits, ST-HOSVD achieved a speedup of 50 over T-HOSVD during the training phase and reduced the classification time and storage costs, while not significantly affecting the classification error. The third application demonstrates the effectiveness of ST-HOSVD in compressing the results of a numerical simulation of a partial differential equation. In such problems, ST-HOSVD can greatly improve the running time: we present an example in which the 2 hour 45 minute T-HOSVD calculation was reduced to just over one minute by ST-HOSVD, a speedup of 133, while also reducing memory consumption.
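The sequential-truncation idea described above can be illustrated with a minimal NumPy sketch (not the authors' implementation): each mode's SVD is applied to the already-compressed core rather than to the full tensor, so successive unfoldings shrink and later SVDs become cheaper.

```python
import numpy as np

def st_hosvd(tensor, ranks):
    """Sketch of a sequentially truncated HOSVD.

    For each mode, compute the SVD of the unfolding of the *partially
    compressed* core, truncate to the target rank, and immediately
    contract the factor back into the core before moving on.
    """
    core = tensor
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold the current (already shrunken) core along `mode`.
        unfolding = np.moveaxis(core, mode, 0).reshape(core.shape[mode], -1)
        # Leading left singular vectors give the factor matrix.
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        U = U[:, :r]
        factors.append(U)
        # Contract: the core shrinks along `mode`, unlike in T-HOSVD,
        # where every SVD sees the full tensor.
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors
```

With full target ranks the orthogonal factors reconstruct the tensor exactly; truncating the ranks yields the approximate Tucker decomposition.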
A stable algorithm to compute the roots of polynomials is presented. The roots are found by computing the eigenvalues of the associated companion matrix by Francis's implicitly shifted QR algorithm. A companion matrix is an upper Hessenberg matrix that is unitary-plus-rank-one, that is, it is the sum of a unitary matrix and a rank-one matrix. These properties are preserved by iterations of Francis's algorithm, and it is these properties that are exploited here. The matrix is represented as a product of 3n − 1 Givens rotators plus the rank-one part, so only O(n) storage space is required. In fact, the information about the rank-one part is also encoded in the rotators, so it is not necessary to store the rank-one part explicitly. Francis's algorithm implemented on this representation requires only O(n) flops per iteration and thus O(n^2) flops overall. The algorithm is described, normwise backward stability is proved, and an extensive set of numerical experiments is presented. The algorithm is shown to be about as accurate as the (slow) Francis QR algorithm applied to the companion matrix without exploiting the structure. It is faster than other fast methods that have been proposed, and its accuracy is comparable to or better than theirs.
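The basic companion-matrix approach (without the structured O(n)-per-iteration representation developed in the paper) can be sketched in a few lines of NumPy; the dense eigensolver below plays the role of the unstructured Francis QR algorithm used as the accuracy baseline.

```python
import numpy as np

def companion_roots(coeffs):
    """Roots of a monic polynomial via its companion matrix (sketch).

    `coeffs` holds [c0, c1, ..., c_{n-1}] for
    p(x) = x^n + c_{n-1} x^{n-1} + ... + c1 x + c0.
    The companion matrix is upper Hessenberg and unitary-plus-rank-one;
    this sketch ignores that structure and calls a dense QR eigensolver,
    costing O(n^3) instead of the paper's O(n^2).
    """
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)      # subdiagonal of ones
    C[:, -1] = -np.asarray(coeffs)  # last column holds the coefficients
    return np.linalg.eigvals(C)
```

For example, `companion_roots([2.0, -3.0])` finds the roots of x² − 3x + 2, namely 1 and 2.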
We present a framework for the construction of linearizations for scalar and matrix polynomials based on dual bases which, in the case of orthogonal polynomials, can be described by the associated recurrence relations. The framework extends the classical linearization theory for polynomials expressed in non-monomial bases and allows the representation of polynomials expressed in product families, that is, as a linear combination of elements of the form φ_i(λ)ψ_j(λ), where {φ_i(λ)} and {ψ_j(λ)} can be either polynomial bases or polynomial families satisfying some mild assumptions. We show that this general construction can be used for many different purposes. Among them, we show how to linearize sums of polynomials and rational functions expressed in different bases. For example, this makes it possible to look for intersections of functions interpolated on different nodes without converting them to the same basis. We then provide constructions of structured linearizations for ⋆-even and ⋆-palindromic matrix polynomials. The extension of these constructions to ⋆-odd and ⋆-antipalindromic matrix polynomials of odd degree is discussed and follows immediately from the previous results.
Currently there is a growing interest in semiseparable matrices and generalized semiseparable matrices. To give an appreciation of the historical evolution of this concept, we present in this paper an extensive list of publications related to the field of semiseparable matrices. It is interesting to see that semiseparable matrices were investigated in different fields, e.g., integral equations, statistics, and vibrational analysis, independently of each other. Also notable is the fact that leading statisticians of the time used semiseparable matrices without knowing that their inverses are tridiagonal. During this historical evolution the definition of semiseparable matrices has always been a difficult point, leading to misunderstandings; sometimes they were defined as the inverses of irreducible tridiagonal matrices, leading to generator-representable matrices, while in other cases they were defined as matrices having low-rank blocks below the diagonal.
In this paper the definition of semiseparable matrices is investigated. Properties of the frequently used definition and the corresponding representation by generators are deduced. Analogous to the class of tridiagonal matrices, another definition of semiseparable matrices is introduced, preserving the nice properties dual to those of tridiagonal matrices. Several theorems and properties are included showing the viability of this alternative definition. Because of the alternative definition, the standard representation of semiseparable matrices is no longer satisfactory. The concept of a representation is explicitly formulated and a new kind of representation corresponding to the alternative definition is given. It is proved that this representation retains all the interesting properties of the generator representation. As an example of the effectiveness of the new representation, we design an O(n) algorithm for multiplying a semiseparable matrix, given in the new representation, with a vector.
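To make the O(n) matrix-vector product concrete, here is a hedged sketch for the classical symmetric *generator* representation (not the paper's new representation): with generators u, v defining S[i, j] = u_i v_j for i ≥ j, the product splits into a prefix sum and a suffix sum.

```python
import numpy as np

def semiseparable_matvec(u, v, x):
    """O(n) product S @ x for a symmetric generator-representable
    semiseparable matrix S, where S[i, j] = u[i] * v[j] for i >= j
    and S[i, j] = u[j] * v[i] for i < j. A sketch of the
    prefix/suffix-sum idea; u, v are the generators.
    """
    # prefix[i] = sum_{j <= i} v_j x_j   (lower-triangular part)
    prefix = np.cumsum(v * x)
    # suffix[i] = sum_{j >= i} u_j x_j, then shift to the strict sum j > i
    suffix = np.cumsum((u * x)[::-1])[::-1]
    suffix = np.append(suffix[1:], 0.0)
    return u * prefix + v * suffix
```

Each entry of the result is y_i = u_i · (Σ_{j≤i} v_j x_j) + v_i · (Σ_{j>i} u_j x_j), so the whole product costs O(n) instead of the O(n²) of a dense multiplication.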
It is well known that any symmetric matrix can be reduced by an orthogonal similarity transformation to tridiagonal form. Once the tridiagonal matrix has been computed, several algorithms can be used to compute either the whole spectrum or part of it. In this paper, we propose an algorithm to reduce any symmetric matrix, by orthogonal similarity transformations, to a similar semiseparable matrix of semiseparability rank 1. It turns out that partial execution of this algorithm computes a semiseparable matrix whose eigenvalues are the Ritz values obtained by the Lanczos process applied to the original matrix. Moreover, it is shown that at the same time a type of nested subspace iteration is performed. These properties allow the design of different algorithms to compute the whole or part of the spectrum. Numerical experiments illustrate the properties of the new algorithm.
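The Ritz values mentioned above are those of the Lanczos process, which can be sketched as follows (a plain Lanczos recurrence with full reorthogonalization, not the paper's semiseparable reduction): k steps build a k × k tridiagonal matrix whose eigenvalues are the Ritz values.

```python
import numpy as np

def lanczos_ritz(A, q0, k):
    """Ritz values from k steps of the Lanczos process (sketch).

    Builds the k x k symmetric tridiagonal matrix T generated by the
    Lanczos recurrence; eigvalsh(T) are the Ritz values of A with
    respect to the Krylov subspace span{q0, A q0, ..., A^{k-1} q0}.
    """
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    Q[:, 0] = q0 / np.linalg.norm(q0)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        # Full reorthogonalization against all previous Lanczos vectors
        # (subsumes the usual alpha/beta subtractions, for robustness).
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        beta[j] = np.linalg.norm(w)
        if beta[j] > 1e-12:
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    return np.linalg.eigvalsh(T)
```

For k equal to the matrix dimension, the Ritz values coincide with the eigenvalues of A; for smaller k they are the approximations that, per the abstract, also emerge from partial execution of the semiseparable reduction.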
When computing an average of positive definite (PD) matrices, the preservation of additional matrix structure is desirable for interpretations in applications. An interesting and widely present structure is that of PD Toeplitz matrices, which we endow with a geometry originating in signal processing theory. As an averaging operation, we consider the barycenter, or minimizer of the sum of squared intrinsic distances. The resulting barycenter, the Kähler mean, is discussed along with its origin. Also, a generalization of the mean towards PD (Toeplitz-Block) Block-Toeplitz matrices is discussed. For PD Toeplitz-Block Block-Toeplitz matrices, we derive the generalized barycenter, or generalized Kähler mean, and a greedy approximation. This approximation is shown to be close to the generalized mean with a significantly lower computational cost.