2008
DOI: 10.1002/rsa.20218

On variants of the Johnson–Lindenstrauss lemma

Abstract: The Johnson-Lindenstrauss lemma asserts that an n-point set in any Euclidean space can be mapped to a Euclidean space of dimension k = O(ε^{-2} log n) so that all distances are preserved up to a multiplicative factor between 1 − ε and 1 + ε. Known proofs obtain such a mapping as a linear map R^n → R^k with a suitable random matrix. We give a simple and self-contained proof of a version of the Johnson-Lindenstrauss lemma that subsumes a basic version by Indyk and Motwani and a version more suitable for …
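
To make the statement concrete, here is a minimal sketch (not taken from the paper) of the standard Gaussian construction: project with a k × d matrix of i.i.d. N(0, 1/k) entries and check that a pairwise distance is preserved up to a 1 ± ε factor. The constant 8 in the choice of k and the helper name jl_project are illustrative assumptions, not the paper's constants.

```python
import numpy as np

def jl_project(points, eps, rng=None):
    """Map n points in R^d to k = O(eps^-2 log n) dimensions with a random
    Gaussian matrix; entries are N(0, 1/k) so squared norms are preserved
    in expectation. The constant 8 is illustrative, not the sharp one."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = points.shape
    k = int(np.ceil(8 * np.log(n) / eps**2))
    T = rng.normal(scale=1.0 / np.sqrt(k), size=(k, d))
    return points @ T.T

# Check one pairwise distance: the ratio is typically within [1 - eps, 1 + eps].
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1000))
Y = jl_project(X, eps=0.25, rng=rng)
orig = np.linalg.norm(X[3] - X[7])
proj = np.linalg.norm(Y[3] - Y[7])
print(orig, proj, proj / orig)
```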

Cited by 187 publications (213 citation statements)
References 14 publications (33 reference statements)
“…Among its many known variants (see [4], [6], [8], [13]), we use the following version originally proven in [1], [4].…”
Section: Introduction (mentioning)
confidence: 99%
“…Here, the target dimension k is much smaller than the original dimension d. The name random projections was coined after the first construction by Johnson and Lindenstrauss in [1], who showed that such mappings exist for k ∈ O(log(1/δ)/ε^2). Other constructions of random projection matrices have been discovered since [2,3,4,5,6]. Their properties make random projections a key player in rank-k approximation algorithms [7,8,9,10,11,12,13,14], other algorithms in numerical linear algebra [15,16,17], compressed sensing [18,19,20], and various other applications, e.g., [21,22].…”
Section: Introduction (mentioning)
confidence: 99%
“…entries such as in [1,2,3,4,5,6] would guarantee the desired low distortion property for the entire space of the matrix D_s. In their analysis they show that Ψ = AD_s is a good random projection for vectors x with bounded ℓ_4 norm.…”
Section: Introduction (mentioning)
confidence: 99%
“…Roughly speaking, since the average number of nonzero entries of the matrix P is just O(log^2 N), FJLT is a fast scheme because the amount of computation involving P is significantly reduced. In [7], J. Matoušek showed that it is possible to replace the Gaussian distribution N(0, q^{-1}) by the Bernoulli (±1) distribution without incurring the dimensionality penalty, further speeding up the computation. Then, in [8], Ailon et al. showed a simpler variant of FJLT by replacing the sparse random matrix P with a deterministic 4-wise independent code matrix (e.g.…”
Section: Introduction (mentioning)
confidence: 99%
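
The ±1 (Bernoulli) replacement for Gaussian entries mentioned in the last quote admits an equally short sketch. The function name and the 1/sqrt(k) scaling are illustrative assumptions; this does not implement the sparse FJLT matrix P or the 4-wise independent code matrix.

```python
import numpy as np

def sign_project(points, k, rng=None):
    """Project with a k x d matrix of independent +/-1 entries scaled by
    1/sqrt(k), the Bernoulli (sign-matrix) variant of the JL map."""
    rng = np.random.default_rng() if rng is None else rng
    d = points.shape[1]
    S = rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k)
    return points @ S.T

# One pairwise distance before and after: the ratio is close to 1 for moderate k.
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 800))
Y = sign_project(X, k=400, rng=rng)
print(np.linalg.norm(Y[0] - Y[1]) / np.linalg.norm(X[0] - X[1]))
```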