2017
DOI: 10.1016/j.laa.2016.09.035

New studies of randomized augmentation and additive preprocessing

Abstract:
• A standard Gaussian random matrix has full rank with probability 1 and is well-conditioned with a probability quite close to 1, converging to 1 fast as the matrix deviates from square shape and becomes more rectangular.
• If we append sufficiently many standard Gaussian random rows or columns to any matrix A such that ||A|| = 1, then the augmented matrix has full rank with probability 1 and is well-conditioned with a probability close to 1, even if the matrix A is rank deficient or ill-conditioned.…
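The abstract's claim is easy to observe numerically. The following is a minimal NumPy sketch (illustrative only, not code from the paper): we build a rank-deficient matrix A with ||A||_2 = 1, append standard Gaussian rows, and check rank and conditioning of the augmented matrix; the matrix sizes and the choice of 2n appended rows are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Build a rank-deficient test matrix A with spectral norm ||A||_2 = 1:
# 50 singular values between 1 and 0.5, the remaining 50 equal to 0.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.zeros(n)
s[: n // 2] = np.linspace(1.0, 0.5, n // 2)
A = U @ np.diag(s) @ V.T
print(np.linalg.matrix_rank(A))   # 50: A is rank deficient

# Append 2n standard Gaussian random rows to A.
G = rng.standard_normal((2 * n, n))
K = np.vstack([A, G])
print(np.linalg.matrix_rank(K))   # 100: the augmented matrix has full rank
print(np.linalg.cond(K))          # moderate condition number, whp
```

With high probability the smallest singular value of the Gaussian block alone bounds that of K from below, so the augmented matrix is well-conditioned even though A itself is singular.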

Cited by 8 publications (6 citation statements)
References 30 publications
“…This continues our earlier study of dual problems of matrix computations with random input, in particular Gaussian elimination where randomization replaces pivoting (see [PQY15], [PZ17a], and [PZ17b]). We further advance this approach in [PLSZa], [PLa], [PLb], and [LPSa].…”
Section: Introduction (supporting)
confidence: 66%
“…A natural research challenge is the combination of our randomized multiplicative preprocessing with the randomized augmentation and additive preprocessing studied in [28,29,31,32,38,35,37]. … under some mild assumptions on the positive oversampling integer p. The above bounds show that low-rank approximations of high quality can be obtained by using a reasonably small oversampling integer parameter p, say p = 20, but they do not apply where p ≤ 1.…”
Section: Discussion (mentioning)
confidence: 99%
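The quoted bounds concern the oversampling parameter p in randomized low-rank approximation (LRA). As a hedged illustration, here is the standard Gaussian range finder (not the preprocessing scheme studied in the paper); the matrix sizes, the target rank r, and the spectrum decay are our own assumptions, chosen so that a small p already gives an accurate approximation.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r, p = 200, 150, 10, 10

# Test matrix with geometrically decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
A = U[:, :n] @ np.diag(s) @ V.T

# Range finder: sample the range of A with r + p Gaussian test
# vectors, then orthogonalize the sample.
Omega = rng.standard_normal((n, r + p))
Q, _ = np.linalg.qr(A @ Omega)
A_lra = Q @ (Q.T @ A)              # rank-(r + p) approximation of A

err = np.linalg.norm(A - A_lra, 2)
print(err)                         # typically well below sigma_{r+1} = 2**-10
```

With p around 10 or 20 the spectral error concentrates near the optimal rank-(r + p) error, matching the statement that a reasonably small oversampling parameter suffices.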
“…[PLSZ16] and [PLSZ17] were the first papers to provide formal support for dual accurate randomized LRA computations performed at sub-linear cost (in these papers such computations are called superfast). The earlier papers [PQY15], [PLSZ16], [PZ17a], and [PZ17b] studied duality for other fundamental matrix computations besides LRA, while the paper [PLb] extended our study to a sub-linear cost dual algorithm for the popular problem of Linear Least Squares Regression and confirmed the accuracy of this solution with the results of numerical experiments.…”
mentioning
confidence: 54%
“…Impact of our study, its extensions and by-products: (i) Our duality approach enables new insight into some fundamental matrix computations besides LRA: [PQY15], [PZ17a], and [PZ17b] provide formal support for the empirical efficiency of dual Gaussian elimination with no pivoting, while [PLb] proposes a sub-linear cost modification of Sarlós' algorithm of 2006 and then proves that whp it outputs a nearly optimal solution of the highly important problem of Linear Least Squares Regression (LLSR), provided that its input is random. Again, this formal proof is in good accordance with the test results.…”
mentioning
confidence: 99%
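The statement above refers to Sarlós' sketch-and-solve approach to least squares regression. For orientation, here is a minimal sketch of that classical scheme (not the sub-linear cost modification of [PLb]); the problem sizes, noise level, and sketch dimension d are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 2000, 20

# Overdetermined noisy system A x ≈ b.
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Exact least-squares solution for reference.
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

# Sketch-and-solve: compress with a d x m Gaussian matrix, d << m,
# then solve the small d x n sketched problem instead.
d = 200
S = rng.standard_normal((d, m)) / np.sqrt(d)
x_sk, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# The sketched solution has a nearly optimal residual, whp.
res_exact = np.linalg.norm(A @ x_exact - b)
res_sk = np.linalg.norm(A @ x_sk - b)
print(res_exact, res_sk)
```

Because a Gaussian sketch with d substantially larger than n is a subspace embedding whp, the residual of the sketched solution exceeds the optimal residual by only a small factor, which is the sense in which the output is "nearly optimal".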