We study the link between the complexity of polynomial matrix multiplication and the complexity of solving other basic linear algebra problems on polynomial matrices. By polynomial matrices we mean n × n matrices over K[x] of degree bounded by d, with K a commutative field. Under the straight-line program model we show that multiplication is reducible to the problem of computing the coefficient of degree d of the determinant. Conversely, we propose algorithms for minimal approximant computation and column reduction that are based on polynomial matrix multiplication; for the determinant, the straight-line program we give also relies on matrix product over K[x] and provides an alternative to the determinant algorithm of [16,17]. We further show that all these problems can be solved in O˜(n^ω d) operations in K. Here the "soft O" notation O˜ indicates some missing log(nd) factors and ω is the exponent of matrix multiplication over K.
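As a concrete illustration of the multiplication side of this link, the sketch below (our own construction, not the paper's straight-line program) multiplies two degree-d polynomial matrices by evaluation at 2d+1 points, pointwise matrix products over K, and Lagrange interpolation. The prime P, the coefficient field Z/P, and all helper names are assumptions of this example.

```python
# Polynomial matrix multiplication by evaluation-interpolation:
# reduces a product in K[x]^{n x n}, degrees <= d, to 2d+1 matrix
# products over K. Illustrative sketch over K = Z/P.

P = 7919  # an illustrative prime with P > 2d + 1

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = (r[i + j] + a * b) % P
    return r

def mat_eval(A, x):
    """Evaluate a matrix of coefficient lists at the point x."""
    return [[sum(c * pow(x, k, P) for k, c in enumerate(p)) % P
             for p in row] for row in A]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % P
             for j in range(n)] for i in range(n)]

def interp(xs, ys):
    """Lagrange interpolation over Z/P: coefficient list of the
    unique polynomial of degree < len(xs) through the points."""
    out = [0] * len(xs)
    for i, xi in enumerate(xs):
        num, den = [1], 1
        for j, xj in enumerate(xs):
            if j != i:
                num = poly_mul(num, [(-xj) % P, 1])
                den = den * (xi - xj) % P
        s = ys[i] * pow(den, -1, P) % P
        for k, c in enumerate(num):
            out[k] = (out[k] + s * c) % P
    return out

def polymat_mul(A, B, d):
    """Product of two polynomial matrices of degree <= d over Z/P."""
    xs = list(range(2 * d + 1))            # 2d+1 evaluation points
    pts = [mat_mul(mat_eval(A, x), mat_eval(B, x)) for x in xs]
    n = len(A)
    return [[interp(xs, [pt[i][j] for pt in pts])
             for j in range(n)] for i in range(n)]

# (x+1, x; 1, 2x) * (x, 1; 1, x): degree-1 inputs, degree-2 product
A = [[[1, 1], [0, 1]], [[1], [0, 2]]]
B = [[[0, 1], [1]], [[1], [0, 1]]]
C = polymat_mul(A, B, 1)
assert C[0][0] == [0, 2, 1]   # (x+1)x + x = x^2 + 2x
assert C[1][1] == [1, 0, 2]   # 1 + 2x^2
```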
We present new baby steps/giant steps algorithms of asymptotically fast running time for dense matrix problems. Our algorithms compute the determinant, characteristic polynomial, Frobenius normal form and Smith normal form of a dense n × n matrix A with integer entries in (n^{3.2} log‖A‖)^{1+o(1)} and (n^{2.697263} log‖A‖)^{1+o(1)} bit operations; here ‖A‖ denotes the largest entry in absolute value and the exponent adjustment by "+o(1)" captures additional factors C_1 (log n)^{C_2} (log log‖A‖)^{C_3} for positive real constants C_1, C_2, C_3. The bit complexity (n^{3.2} log‖A‖)^{1+o(1)} results from using the classical cubic matrix multiplication algorithm. Our algorithms are randomized, and we can certify that the output is the determinant of A in a Las Vegas fashion. The second category of problems deals with the setting where the matrix A has elements from an abstract commutative ring, that is, when no divisions in the domain of entries are possible. We present algorithms that deterministically compute the determinant, characteristic polynomial and adjoint of A with n^{3.2+o(1)} and O(n^{2.697263}) ring additions, subtractions and multiplications.
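To make the baby steps/giant steps pattern concrete, the following sketch (an illustration of the general technique, not the paper's algorithm) computes the Krylov sequence a_i = u^T A^i v over an illustrative prime field using r matrix-vector "baby" steps and repeated "giant" steps by A^r. Wiedemann-style methods feed such a sequence to Berlekamp-Massey to obtain the minimal polynomial, whose constant term yields det(A) up to sign for generic u, v. The prime P, the test matrix, and the naive powering of A are choices of the example.

```python
P = 1_000_003  # illustrative word-size prime

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) % P for row in A]

def vec_mat(u, A):
    return [sum(u[i] * A[i][j] for i in range(len(A))) % P
            for j in range(len(A[0]))]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % P
             for j in range(len(B[0]))] for i in range(len(A))]

def krylov_bsgs(A, u, v, m, r):
    """[u^T A^i v mod P for i < m] with r baby steps per giant step."""
    baby = [v]
    for _ in range(r - 1):
        baby.append(mat_vec(A, baby[-1]))   # v, Av, ..., A^{r-1}v
    Ar = A
    for _ in range(r - 1):                  # A^r (fast powering in a
        Ar = mat_mul(Ar, A)                 #  real implementation)
    seq, row = [], u[:]
    while len(seq) < m:
        for col in baby:                    # row = u^T (A^r)^j, so the
            if len(seq) == m:               # dot products give
                break                       # u^T A^{jr+k} v
            seq.append(sum(x * y for x, y in zip(row, col)) % P)
        row = vec_mat(row, Ar)              # advance the giant step
    return seq

# sanity check against the naive sequence of dependent products
A = [[1, 2, 0], [3, 4, 5], [6, 0, 7]]
u, v = [1, 1, 1], [1, 2, 3]
naive, w = [], v[:]
for _ in range(6):
    naive.append(sum(x * y for x, y in zip(u, w)) % P)
    w = mat_vec(A, w)
assert krylov_bsgs(A, u, v, 6, 2) == naive
```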
We consider the problem of computing univariate polynomial matrices over a field that represent minimal solution bases for a general interpolation problem, some forms of which are the vector M-Padé approximation problem in [Van Barel and Bultheel, Numerical Algorithms 3, 1992] and the rational interpolation problem in [Beckermann and Labahn, SIAM J. Matrix Anal. Appl. 22, 2000]. Particular instances of this problem include the bivariate interpolation steps of Guruswami-Sudan hard-decision and Kötter-Vardy soft-decision decoding of Reed-Solomon codes, the multivariate interpolation step of list-decoding of folded Reed-Solomon codes, and Hermite-Padé approximation. In the mentioned references, the problem is solved using iterative algorithms based on recurrence relations. Here, we discuss a fast, divide-and-conquer version of this recurrence, taking advantage of fast matrix computations over the scalars and over the polynomials. This new algorithm is deterministic, and for computing shifted minimal bases of relations between m vectors of size σ it uses O˜(m^{ω−1}(σ + |s|)) field operations, where ω is the exponent of matrix multiplication, and |s| is the sum of the entries of the input shift s, with min(s) = 0. This complexity bound improves in particular on earlier algorithms in the case of bivariate interpolation for soft decoding, while matching the fastest existing algorithms for simultaneous Hermite-Padé approximation.
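For concreteness, the Hermite-Padé instance of this interpolation problem can be stated and solved naively by linear algebra over Q, as in the hedged sketch below. The kernel computation costs on the order of σ^3 field operations and is meant only to make the problem precise, not to reflect the divide-and-conquer algorithm; all function names are ours.

```python
from fractions import Fraction
from math import factorial

def hermite_pade(series, sigma, d):
    """Nonzero (p_1,...,p_m), deg p_i <= d, with sum p_i f_i = 0
    mod x^sigma. Requires m*(d+1) > sigma so a kernel vector exists."""
    m = len(series)
    cols = m * (d + 1)
    M = [[Fraction(0)] * cols for _ in range(sigma)]
    for i, f in enumerate(series):
        for j in range(d + 1):          # coefficient j of p_i ...
            for t in range(j, sigma):   # ... feeds coefficient t of p_i*f_i
                M[t][i * (d + 1) + j] = Fraction(f[t - j])
    v = kernel_vector(M)
    return [v[i * (d + 1):(i + 1) * (d + 1)] for i in range(m)]

def kernel_vector(M):
    """A nonzero kernel vector over Q via reduced row echelon form."""
    rows, cols = len(M), len(M[0])
    M = [row[:] for row in M]
    pivots, r = {}, 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots[c], r = r, r + 1
    free = next(c for c in range(cols) if c not in pivots)
    v = [Fraction(0)] * cols
    v[free] = Fraction(1)
    for c, pr in pivots.items():
        v[c] = -M[pr][free]
    return v

# order-5 example: p*exp(x) + q*1 = 0 mod x^5, so (for p != 0)
# -q/p is the (2,2) Pade approximant of exp(x)
f1 = [Fraction(1, factorial(k)) for k in range(5)]   # exp(x) mod x^5
f2 = [Fraction(int(k == 0)) for k in range(5)]       # the constant 1
p, q = hermite_pade([f1, f2], 5, 2)
for t in range(5):                                   # verify the congruence
    c = (sum(p[j] * f1[t - j] for j in range(min(t + 1, 3)))
         + sum(q[j] * f2[t - j] for j in range(min(t + 1, 3))))
    assert c == 0
```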
We investigate the technique of storing multiple array elements in the same memory cell, with the goal of reducing the amount of memory used by an array variable. This reduction is both important and achievable during the synthesis of a dedicated processor or code generation for an architecture with a software-controlled scratchpad memory. In the former case, a smaller, less expensive circuit results; in the latter, scratchpad space is saved for other uses, most likely other arrays. The key idea is that once a schedule of operations has been determined, the schedule of references to a given location is known, and elements with disjoint lifetimes may, in principle, share a single memory cell. The difficult problem is one of code generation: how does one generate memory addresses in a simple way, so as to achieve nearly optimal reuse of memory? Previous approaches to memory reuse for arrays consider some particular affine (with modulo expressions) mapping of indices, representing the data to be stored, to memory addresses. We generalize the idea, and develop a mathematical framework based on critical integer lattices that subsumes all previous approaches and gives new insights into the problem. We place the problem in a broader mathematical context, showing its relation to real critical lattices, successive minima, and lattice basis reduction; finally, we propose and analyze various strategies for lattice-based memory allocation.
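The following toy sketch (our construction, not the paper's lattice-based framework) illustrates the core observation on a one-dimensional sliding-window schedule: once lifetimes are fixed, the modular mapping i ↦ i mod (W+1) is a valid allocation, and one cell fewer already causes a conflict. The schedule, the window width W, and the helper names are assumptions of the example.

```python
def lifetimes(N, W):
    """(birth, death) of a[i] under the assumed schedule: element i
    is written at time i and last read at time i + W."""
    return [(i, i + W) for i in range(N)]

def valid(mapping, lives):
    """A mapping is a legal allocation iff no two elements with
    overlapping lifetimes land in the same cell."""
    for i in range(len(lives)):
        for j in range(i + 1, len(lives)):
            if mapping(i) == mapping(j):
                bi, di = lives[i]
                bj, dj = lives[j]
                if bi <= dj and bj <= di:   # lifetimes intersect
                    return False
    return True

N, W = 100, 7
lives = lifetimes(N, W)
assert valid(lambda i: i % (W + 1), lives)   # 8 cells replace 100
assert not valid(lambda i: i % W, lives)     # one cell fewer fails
```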
We present a new algorithm to compute the Smith normal form of large sparse integer matrices. We reduce the computation of the Smith form to independent, and therefore parallel, computations modulo powers of word-size primes. Consequently, the algorithm does not suffer from coefficient growth. We have implemented several variants of this algorithm (elimination and/or black box techniques), since practical performance depends strongly on the memory available. Our method has proven useful in algebraic topology for the computation of the homology of some large simplicial complexes.
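As a point of reference, the sketch below computes the Smith invariants of a tiny dense integer matrix from determinantal divisors (gcds of k × k minors), an exponential-time but transparent specification, and then splits each invariant into its p-parts, the local data that per-prime computations can recover independently. This baseline is ours and is not the paper's sparse algorithm.

```python
from itertools import combinations
from math import gcd

def det(M):
    """Determinant by Laplace expansion (fine for tiny minors)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def smith_invariants(A):
    """Invariant factors s_k = d_k / d_{k-1}, where d_k is the gcd
    of all k x k minors (the k-th determinantal divisor)."""
    n, m = len(A), len(A[0])
    inv, prev = [], 1
    for k in range(1, min(n, m) + 1):
        dk = 0
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                dk = gcd(dk, det([[A[i][j] for j in cols] for i in rows]))
        if dk == 0:
            break
        inv.append(dk // prev)
        prev = dk
    return inv

def p_part(x, p):
    """Largest power of p dividing x."""
    r = 1
    while x % p == 0:
        x //= p
        r *= p
    return r

A = [[2, 4, 4], [-6, 6, 12], [10, -4, -16]]
s = smith_invariants(A)
assert s == [2, 6, 12]
# each invariant is the product of its p-parts; per-prime runs
# recover these local factors independently, then combine them
assert [p_part(x, 2) for x in s] == [2, 2, 4]
assert [p_part(x, 3) for x in s] == [1, 3, 3]
```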
We compute minimal bases of solutions for a general interpolation problem, which encompasses Hermite-Padé approximation and constrained multivariate interpolation, and has applications in coding theory and security. This problem asks to find univariate polynomial relations between m vectors of size σ; these relations should have small degree with respect to an input degree shift. For an arbitrary shift, we propose an algorithm for the computation of an interpolation basis in shifted Popov normal form with a cost of O˜(m^{ω−1}σ) field operations, where ω is the exponent of matrix multiplication and the notation O˜(·) indicates that logarithmic terms are omitted. Earlier works, in the case of Hermite-Padé approximation [34] and in the general interpolation case [18], compute non-normalized bases. Since for arbitrary shifts such bases may have size Θ(m^2 σ), the cost bound O˜(m^{ω−1}σ) was feasible only with restrictive assumptions on the shift that ensure small output sizes. The question of handling arbitrary shifts with the same complexity bound was left open. To obtain the target cost for any shift, we strengthen the properties of the output bases, and of those obtained during the course of the algorithm: all the bases are computed in shifted Popov form, whose size is always O(mσ). Then, we design a divide-and-conquer scheme. We recursively reduce the initial interpolation problem to sub-problems with more convenient shifts by first computing information on the degrees of the intermediate bases.
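The hedged sketch below implements the classical iterative recurrence, in the spirit of the iterative algorithms cited above, that such divide-and-conquer schemes accelerate: it computes a shift-minimal approximant basis for a column of m truncated series, one order at a time, over an illustrative prime field. The prime PR, the names, and the single-column restriction are simplifications of the example, and the output rows are shift-minimal but not normalized to Popov form.

```python
PR = 7919  # an illustrative word-size prime; K = Z/PR

def poly_scale(p, c):
    return [c * x % PR for x in p]

def poly_sub(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [(a - b) % PR for a, b in zip(p, q)]

def order_basis(F, sigma, shift):
    """m x m basis P (entries are coefficient lists) with P*F = 0
    mod x^sigma and rows of minimal shifted degree."""
    m = len(F)
    F = [[x % PR for x in f] for f in F]        # residual series
    P = [[[1] if i == j else [0] for j in range(m)] for i in range(m)]
    deg = list(shift)                           # shifted row degrees
    for k in range(sigma):
        nz = [i for i in range(m) if F[i][k] != 0]
        if not nz:
            continue
        piv = min(nz, key=lambda i: deg[i])     # minimal shifted degree
        inv = pow(F[piv][k], -1, PR)
        for i in nz:
            if i == piv:
                continue
            c = F[i][k] * inv % PR              # zero out row i at order k
            for j in range(m):
                P[i][j] = poly_sub(P[i][j], poly_scale(P[piv][j], c))
            for t in range(k, sigma):
                F[i][t] = (F[i][t] - c * F[piv][t]) % PR
        for j in range(m):                      # multiply pivot row by x
            P[piv][j] = [0] + P[piv][j]
        F[piv] = [0] + F[piv][:-1]
        deg[piv] += 1
    return P

# relations between f = 1/(1-x) mod x^4 and g = 1
f, g = [1, 1, 1, 1], [1, 0, 0, 0]
B = order_basis([f, g], 4, [0, 0])
for p, q in B:                                  # check p*f + q*g = 0 mod x^4
    for t in range(4):
        c = (sum(p[j] * f[t - j] for j in range(min(t + 1, len(p))))
             + sum(q[j] * g[t - j] for j in range(min(t + 1, len(q))))) % PR
        assert c == 0
```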