Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing 2011
DOI: 10.1145/1993636.1993736

Subspace embeddings for the L1-norm with applications

Abstract: We show there is a distribution over linear mappings R : ℓ_1^n → ℓ_1^{O(d log d)} such that, with arbitrarily large constant probability, for any fixed d-dimensional subspace L ⊆ R^n and all x ∈ L, ‖x‖_1 ≤ ‖Rx‖_1 ≤ O(d log d)·‖x‖_1. This provides the first analogue of the ubiquitous subspace Johnson-Lindenstrauss embedding for the ℓ_1-norm. Importantly, the target dimension and distortion are independent of the ambient dimension n. We give several applications of this result. First, we give a faster algorithm for computing well-conditioned bases. Our algorithm is simple, avoiding the linear programming machinery required of prev…
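
As one of the citing excerpts below notes, the construction underlying this result is based on Cauchy random variables. The following minimal NumPy sketch illustrates that idea; the target dimension m = O(d log d), the 1/m scaling, and the empirical distortion check are illustrative assumptions, not the paper's tuned constants.

```python
import numpy as np

def cauchy_sketch(n, m, rng):
    # Dense Cauchy sketch: i.i.d. standard Cauchy entries, rescaled by 1/m.
    # Scaling and constants are illustrative, not the values proven in the paper.
    return rng.standard_cauchy(size=(m, n)) / m

rng = np.random.default_rng(0)
n, d = 10_000, 5                       # ambient dimension and subspace dimension
m = 4 * d * (int(np.log(d)) + 1)       # target dimension O(d log d); constant is a guess

A = rng.standard_normal((n, d))        # basis of a random d-dimensional subspace of R^n
R = cauchy_sketch(n, m, rng)
RA = R @ A                             # embedded subspace lives in R^m, with m independent of n

# Empirically compare ||R(Ax)||_1 with ||Ax||_1 over random directions x in the subspace.
ratios = []
for _ in range(200):
    x = rng.standard_normal(d)
    ratios.append(np.linalg.norm(RA @ x, 1) / np.linalg.norm(A @ x, 1))
print(f"observed distortion range: [{min(ratios):.2f}, {max(ratios):.2f}]")
```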

Cited by 67 publications (126 citation statements) · References 48 publications

“…name | running time | s | κ_Φ
CT [63] | O(mn^2 log n) | O(n log n) | O(n log n)
FCT [19] | O(mn log n) | O(n log n) | O(n^4 log^4 n)
SPCT [66] | nnz(A) | O(n^5 log^5 n) | O(n^3 log^3 n)
Reciprocal Exponential [64] | nnz(A) | O(n log n) | O(n^2 log^2 n)
Sampling (FCT) [19,77] | O(mn log n) | O(n^{13/2} log^{9/2} n log(1/ε)/ε^2) | 1 + ε
Sampling (SPCT) [66,19,77] | O(nnz(A) · log n) | O(n^{15/2} log^{11/2} n log(1/ε)/ε^2) | 1 + ε
Sampling (RET) [64,77] | O(nnz(A) · log n) | O(n^{9/2} log^{5/2} n log(1/ε)/ε^2) | 1 + ε
Table 6: Summary of data-oblivious and data-aware ℓ_1 embeddings. Above, s denotes the embedding dimension.…”
Section: Remark (mentioning)
confidence: 99%
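
The nnz(A)-time rows of the table above (the SPCT-style constructions) use a sketching matrix with a single nonzero Cauchy entry per column, in the spirit of CountSketch. The following is a rough sketch under that assumption; the poly(d) target dimension and scaling here are illustrative, not the cited bounds.

```python
import numpy as np
from scipy import sparse

def sparse_cauchy_transform(n, m, rng):
    # One nonzero per column of S: a uniformly random row index and an i.i.d.
    # Cauchy value, so computing S @ A touches each nonzero of A exactly once.
    rows = rng.integers(0, m, size=n)
    vals = rng.standard_cauchy(size=n)
    return sparse.csr_matrix((vals, (rows, np.arange(n))), shape=(m, n))

rng = np.random.default_rng(1)
n, d = 50_000, 8
A = sparse.random(n, d, density=0.01, random_state=2, format="csr")
S = sparse_cauchy_transform(n, m=d**3, rng=rng)   # poly(d) rows, independent of n
SA = S @ A                                        # cost proportional to nnz(A)
print(SA.shape)
```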
“…Recently, there has been a lot of progress on ℓ_p regression in the case n ≫ d: Cohen and Peng [2015], Woodruff and Zhang [2013], Meng and Mahoney [2013], Clarkson and , Clarkson et al [2016], Sohler and Woodruff [2011], Dasgupta et al [2009]. These results show various ways to find a matrix A′ with fewer rows such that ‖Ax‖_p ≈ ‖A′x‖_p for all vectors x ∈ R^d.…”
Section: Introduction (mentioning)
confidence: 99%
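
A toy sketch-and-solve illustration of that row-reduction idea for ℓ_1 regression follows: sketch the tall problem, then solve the short one. The plain dense Cauchy sketch and the sketch size used here are assumptions for illustration and do not reproduce the (1 + ε) guarantees of the cited constructions.

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(A, b):
    # min_x ||Ax - b||_1 as a linear program: minimize sum(t) subject to |Ax - b| <= t.
    m, d = A.shape
    c = np.concatenate([np.zeros(d), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * d + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]

rng = np.random.default_rng(4)
n, d = 2_000, 4
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_cauchy(n)   # heavy-tailed noise

m = 200                                   # illustrative sketch size, not a proven bound
S = rng.standard_cauchy((m, n)) / m       # A' = S @ A has far fewer rows than A
x_sketched = l1_regression(S @ A, S @ b)  # solve the small problem
x_full = l1_regression(A, b)              # expensive baseline on the full problem
print(np.linalg.norm(x_sketched - x_full))
```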
“…A first step was taken by Woodruff and Sohler [93], who designed the first subspace embedding for ℓ_1 via Cauchy random variables. The method is in principle generalizable to p-stable distributions and was improved in [30,77].…”
Section: Lemma 11 (Distributional Johnson-Lindenstrauss Lemma) (mentioning)
confidence: 99%
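
To make the p-stable generalization mentioned above concrete, symmetric p-stable variates can be drawn with the Chambers-Mallows-Stuck method, which reduces to the standard Cauchy at p = 1. This is a generic sampling sketch, not the specific constructions improved in [30,77].

```python
import numpy as np

def symmetric_p_stable(p, size, rng):
    # Chambers-Mallows-Stuck sampler for symmetric p-stable variates, 0 < p <= 2.
    # At p = 1 the expression collapses to tan(U), i.e. the standard Cauchy used for l1.
    u = rng.uniform(-np.pi / 2, np.pi / 2, size=size)
    w = rng.exponential(1.0, size=size)
    return (np.sin(p * u) / np.cos(u) ** (1 / p)) * (np.cos((1 - p) * u) / w) ** ((1 - p) / p)

rng = np.random.default_rng(3)
S = symmetric_p_stable(0.5, size=(64, 10_000), rng=rng)   # entries of a hypothetical l_{1/2} sketch
print(S.shape, np.median(np.abs(S)))
```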