47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06), 2006
DOI: 10.1109/focs.2006.37
Improved Approximation Algorithms for Large Matrices via Random Projections

Abstract: Recently several results appeared that show significant reduction in time for matrix multiplication, singular value decomposition as well as linear (ℓ2) regression, all based on data dependent random sampling. Our key idea is that low dimensional embeddings can be used to eliminate data dependence and provide more versatile, linear time pass efficient matrix computation. Our main contribution is summarized as follows. • Independent of the recent results of Har-Peled and of Deshpande and Vempala, one of the fi…
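The abstract's key idea — replacing data-dependent sampling with a data-oblivious random projection — can be illustrated for approximate matrix multiplication. A minimal NumPy sketch; the matrix sizes, the sketch size k, and the choice of a Gaussian projection are illustrative assumptions, not the paper's specific construction:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p, k = 200, 1000, 150, 300  # k is the sketch size (hypothetical values)

A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))

# Data-oblivious random projection S: k x n with i.i.d. N(0, 1/k) entries,
# so E[S.T @ S] = I and A @ S.T @ S @ B is an unbiased estimate of A @ B.
S = rng.standard_normal((k, n)) / np.sqrt(k)

exact = A @ B
approx = (A @ S.T) @ (S @ B)   # cost O(k(mn + np + mp)) instead of O(mnp)

# Standard additive-error guarantee: error is small relative to ||A||_F ||B||_F,
# shrinking like 1/sqrt(k) as the sketch size grows.
rel_err = np.linalg.norm(exact - approx) / (np.linalg.norm(A) * np.linalg.norm(B))
```

Because S is drawn independently of the data, the same sketch serves any input matrices, which is what makes the embedding "versatile" and pass efficient.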

Cited by 562 publications (756 citation statements). References 44 publications (96 reference statements).
“…Other constructions of random projection matrices have been discovered since [2,3,4,5,6]. Their properties make random projections a key player in rank-k approximation algorithms [7,8,9,10,11,12,13,14], other algorithms in numerical linear algebra [15,16,17], compressed sensing [18,19,20], and various other applications, e.g., [21,22].…”
Section: Introduction (citation type: mentioning)
confidence: 99%
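The rank-k approximation algorithms this excerpt refers to typically follow a randomized range-finder pattern: project the matrix onto a few random directions, orthonormalize, and project back. A minimal sketch; the matrix sizes, spectral decay, and oversampling amount are assumptions for illustration, not any cited paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical test matrix with rapidly decaying singular values.
m, n, k = 500, 300, 20
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)              # geometric singular-value decay
A = (U * s) @ V.T

# Randomized range finder: sketch A with k + p random directions,
# then orthonormalize to get a basis Q for (approximately) range(A).
p = 10                                # oversampling (assumed value)
G = rng.standard_normal((n, k + p))
Q, _ = np.linalg.qr(A @ G)

# Rank-(k+p) approximation: project A onto the captured subspace.
A_approx = Q @ (Q.T @ A)

err = np.linalg.norm(A - A_approx, 2)
```

For matrices with decaying spectra, the spectral-norm error is close to the best rank-k error, at the cost of a single pass of matrix-vector products with random vectors.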
“…Other suggested methods as [1,7,38,39] seem to have the same complexity O(kmn), since they project each row of A on some k-dimensional subspace of R n . Clearly, a best CUR approximation is chosen by the least squares principle:…”
Section: Low Rank Approximations Using Sampling Of Rows (citation type: mentioning)
confidence: 99%
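The least-squares principle mentioned at the end of this excerpt means: for fixed sampled columns C and rows R, choose the core matrix U minimizing ‖A − CUR‖_F, which is U = C⁺AR⁺. A hedged sketch; it uses uniform sampling for simplicity, whereas the cited methods rely on importance sampling with stronger guarantees, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 100, 80, 10

# Hypothetical low-rank-plus-noise matrix.
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
A += 0.01 * rng.standard_normal((m, n))

# Sample column and row index sets (uniform here, for illustration only).
cols = rng.choice(n, size=2 * k, replace=False)
rows = rng.choice(m, size=2 * k, replace=False)
C, R = A[:, cols], A[rows, :]

# Least-squares principle: U = pinv(C) @ A @ pinv(R) minimizes
# ||A - C @ U @ R||_F over all core matrices U for the chosen C and R.
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)

rel_err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

Since C and R here are actual columns and rows of A, the resulting factorization is interpretable in terms of the original data, which is the usual motivation for CUR over SVD.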
“…In the case of "power-law" networks it was shown in [32] that the spectral counting of triangles can be efficient due to their special spectral properties and [33] extended this idea using the randomized algorithm by [12] by proposing a simple biased node sampling. This algorithm can be viewed as a special case of a streaming algorithm, since there exist algorithms, e.g., [29], that perform a constant number of passes over the non-zero elements of the matrix to produce a good low rank matrix approximation. In [5] the semi-streaming model for counting triangles is introduced, which allows log n passes over the edges.…”
Section: Existing Work (citation type: mentioning)
confidence: 99%
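The spectral counting of triangles this excerpt refers to rests on the identity: the number of triangles in an undirected graph equals trace(A³)/6 = (1/6)·Σᵢ λᵢ³, where λᵢ are the eigenvalues of the adjacency matrix A. A small sketch on a hypothetical example graph:

```python
import numpy as np

# Adjacency matrix of a small undirected graph: triangle {0,1,2}
# plus a pendant edge 2-3 (hypothetical example).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# trace(A^3) counts closed walks of length 3; each triangle is counted
# 6 times (3 starting vertices x 2 directions), hence the division by 6.
eigvals = np.linalg.eigvalsh(A)
triangles = int(round(np.sum(eigvals ** 3) / 6))
```

When the spectrum is dominated by a few large eigenvalues, as in power-law networks, summing cubes of only the top eigenvalues already gives a good estimate, which is what makes the spectral approach efficient there.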