If a square nonnegative matrix A contains sufficiently many nonzero elements, it can be balanced; that is, we can find a diagonal scaling of A that is doubly stochastic. A number of algorithms have been proposed to achieve this balancing, the best known being Sinkhorn-Knopp. In this paper we derive new algorithms based on inner-outer iteration schemes. We show that Sinkhorn-Knopp belongs to this family, but that other members can converge much more quickly. In particular, we show that while stationary iterative methods offer little or no improvement in many cases, a scheme using a preconditioned conjugate gradient method as the inner iteration can give quadratic convergence at low cost.

Introduction. For at least 70 years, scientists in a wide variety of disciplines have attempted to transform square nonnegative matrices into doubly stochastic form by applying diagonal scalings. That is, given A ∈ R^{n×n}, A ≥ 0, find diagonal matrices D1 and D2 so that P = D1 A D2 is doubly stochastic. Motivations for achieving this balance include interpreting economic data [1], preconditioning sparse matrices [16], understanding traffic circulation [14], assigning seats fairly after elections [3], matching protein samples [4] and ordering nodes in a graph [12]. In all of these applications, one of the main methods considered is SK (Sinkhorn-Knopp). This is an iterative process that attempts to find D1 and D2 by alternately normalising columns and rows in a sequence of matrices starting with A. Convergence conditions for this algorithm are well known: if A has total support, then the algorithm converges linearly with asymptotic rate equal to the square of the subdominant singular value of P [22, 23, 12]. Clearly, in some cases the convergence will be painfully slow. The principal aim of this paper is to derive some new algorithms for the matrix balancing problem with an eye on speed, especially for large systems.
First we look at a simple Newton method for symmetric matrices, closely related to a method proposed by Khachiyan and Kalantari [11] for positive definite (but not necessarily nonnegative) matrices. We will show that as long as Newton's method produces a sequence of positive iterates, the Jacobians we generate are positive semi-definite, and that this remains true when we adapt the method to cope with nonsymmetric matrices. To apply Newton's method exactly we require a linear system solve at each step, which is usually prohibitively expensive. We therefore look at iterative techniques for approximating the solution at each step. We first consider splitting methods, and we see that SK is a member of this family, as is the algorithm proposed by Livne and Golub in [16]. We give an asymptotic bound on the (linear) rate of convergence of these methods. For symmetric matrices we can get significant improvement
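The alternating normalisation that defines SK can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation; the function name, tolerance, and iteration cap are our own choices. It returns scaling vectors r and c such that diag(r) A diag(c) is approximately doubly stochastic, and will converge when A has total support.

```python
import numpy as np

def sinkhorn_knopp(A, tol=1e-8, max_iter=1000):
    """Alternately normalise columns and rows of a nonnegative matrix A.

    Returns vectors r, c so that diag(r) @ A @ diag(c) is approximately
    doubly stochastic. Linear convergence is expected when A has total
    support, with rate governed by the subdominant singular value of P.
    """
    n = A.shape[0]
    r = np.ones(n)
    c = np.ones(n)
    for _ in range(max_iter):
        c = 1.0 / (A.T @ r)   # make column sums of diag(r) A diag(c) equal 1
        r = 1.0 / (A @ c)     # make row sums equal 1
        P = (r[:, None] * A) * c[None, :]
        # row sums are exactly 1 by construction; test the column sums
        if np.max(np.abs(P.sum(axis=0) - 1.0)) < tol:
            break
    return r, c
```

Each sweep costs two matrix-vector products, which is exactly why the method is attractive for large sparse systems despite its potentially slow linear rate.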
We present an iterative algorithm which asymptotically scales the $\infty$-norm of each row and each column of a matrix to one. This scaling algorithm preserves the symmetry of the original matrix and shows fast linear convergence with an asymptotic rate of $1/2$. We discuss extensions of the algorithm to the one-norm, and by inference to other norms. For the 1-norm case, we show again that convergence is linear, with the rate dependent on the spectrum of the scaled matrix. We demonstrate experimentally that the scaling algorithm improves the conditioning of the matrix and that it helps direct solvers by reducing the need for pivoting. In particular, for symmetric matrices the theoretical and experimental results highlight the potential of the proposed algorithm over existing alternatives.
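A standard way to realise this kind of $\infty$-norm equilibration is to repeatedly divide each row and column by the square root of its largest absolute entry. The sketch below follows that scheme under our own naming and stopping choices; it is an illustration of the general technique, not the authors' code. Because rows and columns are treated identically, a symmetric input yields identical left and right scalings, preserving symmetry as the abstract describes.

```python
import numpy as np

def equilibrate_inf(A, tol=1e-8, max_iter=100):
    """Scale rows and columns of A so every row/column inf-norm tends to 1.

    Returns diagonal scaling vectors d, e and the scaled matrix
    B = diag(d) @ A @ diag(e). Dividing by the square roots of the row
    and column maxima keeps the scaling symmetric when A is symmetric.
    """
    m, n = A.shape
    d = np.ones(m)
    e = np.ones(n)
    B = np.abs(A).astype(float)
    B = A.astype(float).copy()
    for _ in range(max_iter):
        row = np.sqrt(np.max(np.abs(B), axis=1))
        col = np.sqrt(np.max(np.abs(B), axis=0))
        row[row == 0] = 1.0   # leave all-zero rows/columns untouched
        col[col == 0] = 1.0
        d /= row
        e /= col
        B = (B / row[:, None]) / col[None, :]
        err = max(np.max(np.abs(np.max(np.abs(B), axis=1) - 1.0)),
                  np.max(np.abs(np.max(np.abs(B), axis=0) - 1.0)))
        if err < tol:
            break
    return d, e, B
```

The resulting B can then be handed to a direct solver; the improved conditioning and more uniform entry magnitudes are what reduce the need for pivoting in the experiments the abstract refers to.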