Abstract:
• A standard Gaussian random matrix has full rank with probability 1 and is well-conditioned with a probability quite close to 1, converging to 1 fast as the matrix deviates from square shape and becomes more rectangular.
• If we append sufficiently many standard Gaussian random rows or columns to any matrix A such that ||A|| = 1, then the augmented matrix has full rank with probability 1 and is well-conditioned with a probability close to 1, even if A itself is rank deficient or ill-conditioned.
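Both claims above can be checked numerically. The following is a minimal sketch with NumPy; the matrix sizes and the random seed are our own illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-deficient 8x8 matrix (rank 4), scaled so that ||A|| = 1.
B = rng.standard_normal((8, 4))
A = B @ B.T
A /= np.linalg.norm(A, 2)

# Append 8 standard Gaussian random columns: the augmented 8x16 matrix
# has full rank 8 with probability 1 and is typically well-conditioned.
G = rng.standard_normal((8, 8))
A_aug = np.hstack([A, G])

print(np.linalg.matrix_rank(A))      # 4: A is rank deficient
print(np.linalg.matrix_rank(A_aug))  # 8: the augmented matrix has full rank
```

Here `np.linalg.cond(A_aug)` is finite and modest, whereas `A` alone has an infinite condition number.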
“…This continues our earlier study of dual problems of matrix computations with random input, in particular Gaussian elimination where randomization replaces pivoting (see [PQY15], [PZ17a], and [PZ17b]). We further advance this approach in [PLSZa], [PLa], [PLb], and [LPSa].…”
Low rank approximation of a matrix (hereafter LRA) is a highly important area of Numerical Linear and Multilinear Algebra and Data Mining and Analysis, with numerous important applications to modern computations. One can operate with an LRA of a matrix at sub-linear cost, that is, by using far fewer memory cells and flops than the matrix has entries, but no sub-linear cost algorithm can compute accurate LRA of worst-case input matrices, or even of the matrices of the small families of low-rank matrices in our Appendix B. Nevertheless we prove that some old and new sub-linear cost algorithms can solve the dual LRA problem, that is, with a high probability (hereafter whp) compute a close LRA of a random matrix admitting LRA. Our tests are in good accordance with our formal study, and we have extended our progress in various directions, in particular to dual Linear Least Squares Regression at sub-linear cost.
“…A natural research challenge is the combination of our randomized multiplicative preprocessing with randomized augmentation and additive preprocessing, studied in [28,29,31,32,38,35,37].…” “…under some mild assumptions on the positive oversampling integer p. The above bounds show that low-rank approximations of high quality can be obtained by using a reasonably small oversampling parameter p, say p = 20, but they do not apply where p ≤ 1.…”
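The role of the oversampling parameter p in the quoted bounds can be illustrated with the basic Gaussian range finder. This is a sketch only; the function name `rand_lra`, the sizes, and the seed are our own illustrative choices, not from the quoted papers:

```python
import numpy as np

def rand_lra(A, r, p=20, rng=None):
    """Rank-(r+p) approximation of A via a Gaussian range finder with oversampling p."""
    rng = rng or np.random.default_rng()
    G = rng.standard_normal((A.shape[1], r + p))  # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ G)                    # orthonormal basis of the sampled range
    return Q @ (Q.T @ A)                          # project A onto that range

rng = np.random.default_rng(2)
# A 100x80 matrix of rank 5 plus small noise, so it admits a close LRA.
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
A += 1e-8 * rng.standard_normal((100, 80))

err = np.linalg.norm(A - rand_lra(A, r=5, p=20, rng=rng)) / np.linalg.norm(A)
print(err)  # near the 1e-8 noise level
```

With p = 20 the sampled range captures the dominant rank-5 subspace whp, which is what the quoted bounds quantify; with p ≤ 1 those bounds give no such guarantee.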
We study two applications of standard Gaussian random multipliers. At first we prove that, with a probability close to 1, such a multiplier is expected to numerically stabilize Gaussian elimination with no pivoting as well as block Gaussian elimination. Then, by extending our analysis, we prove that such a multiplier is also expected to support low-rank approximation of a matrix without the customary oversampling. Our test results are in good accordance with this formal study. The results remain similar when we replace Gaussian multipliers with random circulant or Toeplitz multipliers, which involve fewer random parameters and enable faster multiplication. We formally support the observed efficiency of random structured multipliers applied to approximation, but we still continue our research in the case of elimination. We specify a narrow class of unitary inputs for which Gaussian elimination with no pivoting is numerically unstable and then prove that, with a probability close to 1, a Gaussian random circulant multiplier does not fix the numerical stability problems for such inputs. We also prove that the power of random circulant preprocessing increases if we also include random permutations.

Keywords: Gaussian elimination; Pivoting; Block Gaussian elimination; Low-rank approximation; SRFT matrices; Random circulant matrices

✩ Some results of this paper have been presented at 203…
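The effect of a Gaussian multiplier on Gaussian elimination with no pivoting (GENP) can be seen on the smallest failing example. This toy sketch is our own illustration, not the paper's code; it uses the 2x2 exchange matrix, whose leading pivot is 0:

```python
import numpy as np

def lu_no_pivot(A):
    """Gaussian elimination with no pivoting: A = L @ U, or failure on a zero pivot."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        if U[k, k] == 0:
            raise ZeroDivisionError(f"zero pivot at step {k}")
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]
        U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
    return L, U

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # GENP breaks down: the leading pivot is 0

rng = np.random.default_rng(1)
G = rng.standard_normal((2, 2))         # standard Gaussian multiplier
L, U = lu_no_pivot(G @ A)               # succeeds with probability 1
print(np.allclose(L @ U, G @ A))        # True
```

After premultiplication by G, every leading principal submatrix of G @ A is nonsingular with probability 1, so GENP runs to completion; the paper's contribution is proving that it is also expected to run stably.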
“…[PLSZ16] and [PLSZ17] were the first papers that provided formal support for dual accurate randomized LRA computations performed at sub-linear cost (in these papers such computations are called superfast). The earlier papers [PQY15], [PLSZ16], [PZ17a], and [PZ17b] studied duality for other fundamental matrix computations besides LRA, while the paper [PLb] extended our study to a sub-linear cost dual algorithm for the popular problem of Linear Least Squares Regression and confirmed the accuracy of this solution with numerical experiments.…”
“…Impact of our study, its extensions and by-products: (i) Our duality approach enables new insight into some fundamental matrix computations besides LRA: [PQY15], [PZ17a], and [PZ17b] provide formal support for the empirical efficiency of dual Gaussian elimination with no pivoting, while [PLb] proposes a sub-linear cost modification of Sarlós' algorithm of 2006 and then proves that whp it outputs a nearly optimal solution of the highly important problem of Linear Least Squares Regression (LLSR) provided that its input is random. Again, this formal proof is in good accordance with the test results.…”
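The snippet refers to Sarlós' sketch-and-solve approach to LLSR. A minimal version looks as follows; the sizes, the seed, and the use of a dense Gaussian sketch (rather than Sarlós' fast structured transform or the sub-linear cost modification of [PLb]) are our own simplifications:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, s = 2000, 10, 60               # tall least-squares problem; sketch size s >> n
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 1e-3 * rng.standard_normal(m)

# Sketch-and-solve: compress the m-equation problem to s equations, solve exactly.
S = rng.standard_normal((s, m)) / np.sqrt(s)
x_sk, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)

# The sketched residual is close to the optimal residual with high probability.
print(np.linalg.norm(A @ x_sk - b) / np.linalg.norm(A @ x_opt - b))
```

The residual ratio is near 1 whp, at the cost of solving an s-by-n problem instead of an m-by-n one.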
Low Rank Approximation (LRA) of a matrix is a hot research subject, fundamental for Matrix and Tensor Computations and Big Data Mining and Analysis. Computations with LRA can be performed at sub-linear cost, that is, by using far fewer arithmetic operations and memory cells than an input matrix has entries. Although every sub-linear cost algorithm for LRA fails to approximate worst-case inputs, we prove that our sub-linear cost variations of a popular subspace sampling algorithm output accurate LRA of a large class of inputs. Namely, they do so with a high probability (hereafter whp) for a random input matrix that admits an LRA. In other papers we have proposed and analyzed sub-linear cost algorithms for other important matrix computations. Our numerical tests are in good accordance with our formal results.
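A sub-linear cost algorithm of the kind described reads only a small fraction of the input. A toy cross-approximation (CUR-type) sketch, with the function name, sizes, and seed all our own illustrative choices, can look as follows:

```python
import numpy as np

def cur_factors(A, r, rng=None):
    """CUR-type LRA that reads only r columns and r rows of A (sub-linear for large A)."""
    rng = rng or np.random.default_rng()
    I = rng.choice(A.shape[0], r, replace=False)   # random row indices
    J = rng.choice(A.shape[1], r, replace=False)   # random column indices
    C, R = A[:, J], A[I, :]
    U = np.linalg.pinv(A[np.ix_(I, J)])            # generator: pseudo-inverse of the r x r core
    return C, U, R                                  # A ~= C @ U @ R

rng = np.random.default_rng(4)
# A random 200x150 matrix of exact rank 5: whp the 5x5 core is nonsingular
# and the CUR factors recover A up to rounding errors.
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
C, U, R = cur_factors(A, 5, rng)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # tiny relative error
```

For a random rank-r input the r-by-r core is nonsingular with probability 1, so the sketch is exact in exact arithmetic; on worst-case inputs the same algorithm can miss all the significant entries, which is the dichotomy the abstract describes.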