2010
DOI: 10.1007/s00454-010-9309-5

Dense Fast Random Projections and Lean Walsh Transforms

Abstract: Random projection methods give distributions over k × d matrices such that if a matrix Ψ (chosen according to the distribution) is applied to a finite set of vectors x_i ∈ R^d, the resulting vectors Ψx_i ∈ R^k approximately preserve the original metric with constant probability. First, we show that any matrix (composed with a random ±1 diagonal matrix) is a good random projector for a subset of vectors in R^d. Second, we describe a family of tensor product matrices which we term Lean Walsh. We show that using Lean Walsh…
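
The first result in the abstract — composing an arbitrary fixed matrix with a random ±1 diagonal — is mechanically very simple. Below is a minimal numpy sketch of the idea; the Gaussian choice of the fixed matrix A is purely illustrative (the paper's point is that many fixed matrices work once composed with the random sign flip), so treat A and the dimensions as assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 1024, 64

# Illustrative fixed matrix A (Gaussian here only for concreteness; the
# result concerns general fixed matrices composed with a random diagonal).
A = rng.standard_normal((k, d)) / np.sqrt(k)

# Random +/-1 diagonal D: applying D is just a coordinate-wise sign flip.
signs = rng.choice([-1.0, 1.0], size=d)

def project(x):
    """Apply Psi = A D to x: flip random signs, then multiply by A."""
    return A @ (signs * x)

x = rng.standard_normal(d)
print(np.linalg.norm(project(x)) / np.linalg.norm(x))  # close to 1
```

Note that applying D costs only O(d), so the cost of Ψ is dominated by the fixed matrix A.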

Cited by 22 publications (31 citation statements). References 22 publications (28 reference statements).
“…It turns out that the preconditioning transforms also satisfy the metric preservation properties of the Johnson-Lindenstrauss lemma and can be applied quickly, but do not reduce the dimension of the data. Ailon and Liberty [18] proposed the use of non-square Hadamard-like preconditioners (the so-called lean Walsh matrices) to do the preconditioning and dimensionality reduction in the same step.…”
Section: Combining Projections With Preconditioning
confidence: 99%
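
For intuition on why a lean Walsh matrix can precondition and reduce dimension at once, here is a hedged sketch: a rectangular seed with orthonormal rows is raised to a Kronecker (tensor) power, giving a 3^ℓ × 4^ℓ matrix, so k < d by construction. The seed below (three rows of a 4×4 Hadamard matrix) is an illustrative assumption, not necessarily the paper's seed, and this sketch materializes the matrix explicitly only to show its shape; the fast O(d)-time application exploits the tensor structure instead.

```python
import numpy as np
from functools import reduce

# Illustrative seed: three rows of a 4x4 Hadamard matrix, scaled by 1/2 so
# the rows are orthonormal. (The paper states the general conditions a
# lean Walsh seed must satisfy; this particular seed is an assumption.)
SEED = np.array([[1,  1,  1,  1],
                 [1, -1,  1, -1],
                 [1,  1, -1, -1]], dtype=float) / 2.0

def lean_walsh(ell):
    """ell-th Kronecker (tensor) power of the seed: 3^ell x 4^ell."""
    return reduce(np.kron, [SEED] * ell)

L = lean_walsh(3)                        # 27 x 64: rectangular, so k < d
print(L.shape)                           # (27, 64)
print(np.allclose(L @ L.T, np.eye(27)))  # Kronecker powers keep rows orthonormal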
“…While the original proof of the JL Lemma uses measure concentration on the sphere, as well as the Brunn-Minkowski inequality [20], the lemma has been reproved and improved extensively [17,18,21,1,15,8,4,3,2,13,8,9]. Early work showed that the projections could be constructed from simple random matrices, with newer results simplifying the distributions that projections are sampled from, improving the running time of the mapping, and even derandomizing the construction.…”
Section: Introduction
confidence: 99%
“…As noted in Sec. II-C, the projection scheme currently requires O(dk), but it has been suggested effective projection can be carried out in O(d) operations [38].…”
Section: Experiments and Results
confidence: 99%
“…The random projection can be performed by constructing a k × d random matrix; by employing this approach, projection of each point requires a matrix-vector multiply taking O(dk) operations. Recent theoretical work suggests that even more efficient projections are possible, with [38] proposing an algorithm to project from dimension d to dimension k with O(d) operations.…”
Section: Methods
confidence: 99%
“…Following this proof, given the initial set of n points in R^d, represented as an n × d matrix where each feature-patch is represented by a row, let R be a d × k random matrix with R(i, j) = r_ij, where the independent random variables r_ij are 1 with probability 0.5 and −1 with probability 0.5. Naively, the random projection can be performed by constructing a k × d random matrix, so that mapping each point takes O(dk) operations; however, recent theoretical work suggests that a projection from dimension d to dimension k can be computed with O(d) operations [11].…”
Section: Methods
confidence: 99%
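
The ±1 construction quoted above is simple to write down. A minimal numpy sketch of the naive O(dk)-per-point mapping follows; the 1/√k scaling is a standard normalization that the excerpt leaves implicit (an assumption here), chosen so that expected squared norms are unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 100, 512, 32

X = rng.standard_normal((n, d))   # n points in R^d, one per row

# R(i, j) = +1 or -1, each with probability 0.5, as in the quoted text.
# The 1/sqrt(k) factor is a standard normalization (an assumption here).
R = rng.choice([-1.0, 1.0], size=(d, k)) / np.sqrt(k)

Y = X @ R                         # naive mapping: O(dk) per point

# Pairwise distances are approximately preserved:
print(np.linalg.norm(X[0] - X[1]), np.linalg.norm(Y[0] - Y[1]))
```

As the quoted passages note, the lean Walsh approach of [38] / [11] avoids materializing R and brings the per-point cost down to O(d).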