Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing 2013
DOI: 10.1145/2488608.2488620

Low rank approximation and regression in input sparsity time

Abstract: We design a new distribution over poly(rε⁻¹) × n matrices S so that for any fixed n × d matrix A of rank r, with probability at least 9/10, ‖SAx‖₂ = (1 ± ε)‖Ax‖₂ simultaneously for all x ∈ R^d. Such a matrix S is called a subspace embedding. Furthermore, SA can be computed in O(nnz(A)) time, where nnz(A) is the number of non-zero entries of A. This improves over all previous subspace embeddings, which required at least Ω(nd log d) time to achieve this property. We call our matrices S sparse embedding matrices…
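To make the construction concrete, here is a minimal numpy sketch (not the paper's code) of the s = 1 sparse embedding the abstract describes; the function name and dimensions are illustrative. Each row of A is hashed, with a random sign, into one of m output rows, so forming SA is a single pass over the rows of A (O(nnz(A)) when A is stored sparsely; the dense array below is for readability).

```python
import numpy as np

def sparse_embedding(A, m, seed=0):
    """CountSketch-style sparse embedding: S has one nonzero (+/-1) per
    row of A, so SA adds each row of A, with a random sign, to one
    uniformly random row of the output."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    h = rng.integers(0, m, size=n)         # target row in SA for each row of A
    sgn = rng.choice([-1.0, 1.0], size=n)  # random sign per row of A
    SA = np.zeros((m, A.shape[1]))
    np.add.at(SA, h, sgn[:, None] * A)     # single accumulation pass over A
    return SA

A = np.random.default_rng(1).standard_normal((10_000, 20))
SA = sparse_embedding(A, m=2_000)
x = np.random.default_rng(2).standard_normal(20)
print(np.linalg.norm(SA @ x) / np.linalg.norm(A @ x))  # close to 1 with good probability
```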

Cited by 187 publications (168 citation statements) · References 58 publications
“…Meanwhile, our understanding for efficient Φ, such as the SJLT with small s, has not moved beyond the worst case. In some very specific examples of T we do have good bounds for settings of s, m that suffice, such as T the unit norm vectors in a d-dimensional subspace [CW13, MM13, NN13a], or all elements of T having small ℓ∞ norm [Mat08, DKS10, KN10, BOR10]. However, our understanding for general T is non-existent.…”
Section: GAFA Toward a Unified Theory of Sparse Dimensionality Reduction
confidence: 99%
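For context on the quoted setting, here is a small illustrative SJLT construction, assuming the standard definition (each column of Φ carries exactly s signed nonzeros of magnitude 1/√s); setting s = 1 recovers the sparse embedding of [CW13]. The dense array is for readability only.

```python
import numpy as np

def sjlt(n, m, s, seed=0):
    """Sparse Johnson-Lindenstrauss transform: each column of Phi has
    exactly s nonzeros, each +/- 1/sqrt(s), placed in s distinct
    uniformly random rows."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)  # s distinct rows
        Phi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return Phi

Phi = sjlt(n=5_000, m=500, s=4)
x = np.random.default_rng(1).standard_normal(5_000)
print(np.linalg.norm(Phi @ x) / np.linalg.norm(x))  # concentrates near 1
```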
“…Often an exact solution requires computing the singular value decomposition (SVD) of A, but using OSEs the running time is reduced to that for computing ΦA, plus computing the SVD of the smaller matrix ΦA. The work [CW13] showed s = 1 with small m is sufficient, yielding algorithms for least squares regression and low-rank approximation whose running times are linear in the number of non-zero entries in A for sufficiently lopsided rectangular matrices.…”
Section: Applications
confidence: 99%
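A hedged numpy sketch of the sketch-and-solve recipe this quote describes (illustrative names and sizes, not the cited works' code): embed both A and b with an s = 1 sparse embedding, then solve the small m × d problem min_x ‖SAx − Sb‖₂ in place of min_x ‖Ax − b‖₂. With m large enough, the sketched solution's residual is within a (1 + ε) factor of optimal with constant probability.

```python
import numpy as np

def sketched_lstsq(A, b, m, seed=0):
    """Sketch-and-solve regression with an s = 1 sparse embedding:
    hash rows of [A | b] into m buckets with random signs, then solve
    the small least squares problem exactly."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    h = rng.integers(0, m, size=n)
    sgn = rng.choice([-1.0, 1.0], size=n)
    SA = np.zeros((m, A.shape[1]))
    Sb = np.zeros(m)
    np.add.at(SA, h, sgn[:, None] * A)
    np.add.at(Sb, h, sgn * b)
    x, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
    return x

rng = np.random.default_rng(1)
A, b = rng.standard_normal((20_000, 10)), rng.standard_normal(20_000)
x_hat = sketched_lstsq(A, b, m=1_000)
x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(A @ x_hat - b) / np.linalg.norm(A @ x_opt - b))  # ~1 + eps
```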
“…We shall focus on sketching-based algorithms in the tradition of Sarlós [11], which use a small number of random linear combinations of columns and rows to approximate the SVD [12-14]. This further compression of the statistics matrix drives down the space complexity of the HMM inference problem at the expense of some small amount of error, which can be bounded analytically at the expense of some additional time and space. As the matrix is an approximation in the first place, this is likely not too great a cost.…”
Section: Introduction
confidence: 99%
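One common instantiation of "random linear combinations of columns to approximate the SVD" is the Gaussian-sketch randomized SVD sketched below; this is an illustrative variant under that assumption, not necessarily the exact algorithm of the works cited above. The idea: sample the range of A, orthonormalize, and run an exact SVD only on the small projected matrix.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Approximate rank-k SVD: sketch the range of A with random linear
    combinations of its columns, orthonormalize, then compute an exact
    SVD of the small projected matrix Q^T A."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))  # test matrix
    Q, _ = np.linalg.qr(A @ Omega)          # basis for the sampled range
    Uh, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uh)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(1)
A = rng.standard_normal((2_000, 50)) @ rng.standard_normal((50, 300))  # rank <= 50
U, s, Vt = randomized_svd(A, k=50)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # near 0
```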
“…Dimensionality reduction and feature selection have been attracting a significant amount of attention [8], [16]. Recently popular approaches include the well-known "leverage-scores"-based approach [11], [12] and random projections (RP) [1], [3], [11], [12]. Leverage-scores-based approaches rely on the singular value decomposition of the data matrix to assign probabilities to each feature.…”
Section: Introduction
confidence: 99%
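As a concrete example of the leverage-score construction this quote describes: the score of each feature is the squared norm of the corresponding column of Vᵀ from the thin SVD A = USVᵀ, and normalizing the scores yields feature-sampling probabilities. A minimal numpy sketch with illustrative sizes:

```python
import numpy as np

def column_leverage_scores(A):
    """Leverage score of each column (feature) of A: squared norm of the
    corresponding column of V^T from the thin SVD A = U S V^T.
    The scores sum to rank(A)."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return np.sum(Vt**2, axis=0)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 40))
tau = column_leverage_scores(A)
p = tau / tau.sum()                                   # probability per feature
picked = rng.choice(A.shape[1], size=10, replace=False, p=p)  # sample features
print(sorted(picked))
```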