DOI: 10.1007/978-3-540-85363-3_40

Dense Fast Random Projections and Lean Walsh Transforms

Abstract: Random projection methods give distributions over k × d matrices such that if a matrix Ψ (chosen according to the distribution) is applied to a finite set of vectors x_i ∈ R^d, the resulting vectors Ψx_i ∈ R^k approximately preserve the original metric with constant probability. First, we show that any matrix (composed with a random ±1 diagonal matrix) is a good random projector for a subset of vectors in R^d. Second, we describe a family of tensor product matrices which we term Lean Walsh. We show that using Lean Walsh…
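A minimal sketch of the recipe the abstract describes, assuming a dense Gaussian matrix as the fixed matrix (the paper allows more general fixed matrices, and its actual guarantees are for the vector subsets it characterizes; all names below are ours, for illustration only):

```python
# Sketch (not the paper's exact construction): compose an arbitrary fixed
# k x d matrix A with a random +/-1 diagonal D, and project x -> A @ (D @ x).
import numpy as np

rng = np.random.default_rng(0)
d, k = 1024, 64

A = rng.standard_normal((k, d)) / np.sqrt(k)  # stand-in fixed matrix; scaled so
                                              # E||A z||^2 = ||z||^2
D = rng.choice([-1.0, 1.0], size=d)           # random sign diagonal, stored as a vector

x = rng.standard_normal(d)
y = A @ (D * x)                               # Psi x, with Psi = A D

print(np.linalg.norm(x), np.linalg.norm(y))   # norms agree up to small distortion
```

The random sign diagonal D is what turns the otherwise fixed matrix A into a random projector for the vector subsets the abstract refers to.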

Cited by 27 publications (12 citation statements)
References 25 publications (22 reference statements)
“…Also, as defined here, n is a power of 2, but variants of this construction exist for other values of n.) Importantly, applying the randomized Hadamard transform, i.e., computing the product xDH for any vector x ∈ R^n, takes O(n log n) time (or even O(n log r) time if only r elements of the transformed vector need to be accessed). Applying such a structured random projection was first proposed in [71,72]; it was first applied in the context of randomized matrix algorithms in [80,81], and there has been a great deal of research in recent years on variants of this basic structured random projection that are better in theory or in practice [73,81,82,83,84,85,1,86,87,88,89,90]. For example, one could choose Ω = DHS, where S is a random sampling matrix, as defined above, that represents the operation of uniformly sampling a small number of columns from the randomized Hadamard transform.…”
Section: Random Sampling and Random Projections (mentioning)
confidence: 99%
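A hedged sketch of the subsampled randomized Hadamard transform Ω = DHS described in the excerpt above, written for column vectors (x ↦ S H D x, the transpose of the row-vector convention xDH used there); fwht and srht are our own illustrative names:

```python
# Subsampled randomized Hadamard transform sketch: random sign diagonal D,
# fast Walsh-Hadamard transform H in O(n log n) time, then uniform sampling S.
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    x = x.copy()
    n, h = len(x), 1
    while h < n:                              # log n butterfly stages
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

def srht(x, r, rng):
    """Apply x -> sqrt(n/r) * S H D x, so that E||srht(x)||^2 = ||x||^2."""
    n = len(x)
    signs = rng.choice([-1.0, 1.0], size=n)   # D: random sign diagonal
    y = fwht(signs * x) / np.sqrt(n)          # H / sqrt(n) is orthogonal
    idx = rng.choice(n, size=r, replace=False)  # S: uniform sample of r coordinates
    return np.sqrt(n / r) * y[idx]

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
y = srht(x, r=128, rng=rng)
print(np.linalg.norm(x), np.linalg.norm(y))   # approximately equal
```

On the excerpt's O(n log r) remark: when only the r sampled outputs are needed, a pruned transform that skips unused butterfly branches achieves that bound; the sketch above computes the full transform for simplicity.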
“…matrices where it takes at most o(n log n) time to implement the multiplication. Please see the constructions in [1,10,16,19,20] as well as the more recent papers [2,23] for further details on related and improved constructions.…”
Section: Introduction (mentioning)
confidence: 99%
“…Before getting into details of the proof, we first introduce the following two lemmas from [16] and [27]:…”
Section: Preserving Inner Product (mentioning)
confidence: 99%