2010
DOI: 10.21236/ada555156

A Randomized Approximate Nearest Neighbors Algorithm

Abstract: We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find the k nearest neighbors of each x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its CPU time requirements are proportional to …, with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A byproduct of the scheme is a data structure, permitting a rapid…
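The abstract describes an iterative randomized search for the k nearest neighbors of every point in a set. As a rough illustration of that general idea (this is a generic sketch, not the authors' algorithm; the function name, the candidate window, and all parameters are assumptions), each iteration below projects the points onto a random direction, sorts them, and treats points adjacent in the sorted order as neighbor candidates; exact distances then keep the k best candidates per point.

```python
import numpy as np

def approx_knn(points, k, n_iter=8, seed=0):
    """Illustrative randomized approximate k-NN via repeated random projections."""
    rng = np.random.default_rng(seed)
    N, d = points.shape
    candidates = {i: {} for i in range(N)}  # point index -> {neighbor index: distance}
    window = k  # how many sorted neighbors to examine on each side
    for _ in range(n_iter):
        u = rng.standard_normal(d)          # random projection direction
        order = np.argsort(points @ u)      # sort points along that direction
        for pos, i in enumerate(order):
            lo, hi = max(0, pos - window), min(N, pos + window + 1)
            for j in order[lo:hi]:
                if j != i:
                    # record the exact distance for this candidate pair
                    candidates[i][j] = np.linalg.norm(points[i] - points[j])
    # keep the k closest candidates found for each point
    return {i: sorted(c.items(), key=lambda t: t[1])[:k]
            for i, c in candidates.items()}
```

Points that are close in R^d tend to land next to each other in many of the sorted orders, so the union of candidates over iterations captures most true neighbors at low cost.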

Cited by 26 publications (28 citation statements)
References 9 publications (21 reference statements)
“…Second, we demonstrate the performance of the algorithm empirically, by running it on sets of points, generated according to the Gaussian distribution. The choice of uniform or Hamming distributions instead of Gaussian results in very similar performance (see [10] for results and details). The algorithm has been implemented in FORTRAN (Lahey 95 Linux version).…”
Section: Numerical Results
confidence: 94%
“…uniform distribution on the discrete set of the vertices of [0, 1] d ). For both uniform and Hamming distributions, the performance of the algorithm was very similar to that in the Gaussian case (see [10] for details).…”
confidence: 70%
“…We note that the problem of selecting a unitary matrix uniformly at random finds application in machine learning (see [39] and the references therein). The algorithm developed by Can is similar to that developed by Jones, Osipov and Rokhlin [40] in that it alternates (partial) Hadamard matrices and diagonal matrices; the difference is that the unitary 3-design property of the Clifford group [11] provides randomness guarantees.…”
Section: Main Ideas and Discussion
confidence: 99%
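The last excerpt mentions constructions that alternate Hadamard matrices with diagonal matrices to generate random unitary (orthogonal) maps. A minimal sketch of that general pattern (not Can's construction nor the Jones–Osipov–Rokhlin one; function names, the number of rounds, and the normalization are assumptions): each round flips the sign of every coordinate at random (a random diagonal ±1 matrix) and then applies a normalized fast Walsh–Hadamard transform, so the composite map is orthogonal and costs only O(n log n) per application.

```python
import numpy as np

def hadamard_transform(x):
    """Normalized fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    x = np.asarray(x, dtype=float).copy()
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b          # butterfly: sums
            x[i + h:i + 2 * h] = a - b  # butterfly: differences
        h *= 2
    return x / np.sqrt(n)  # normalize so the transform is orthogonal

def random_rotation(x, n_rounds=3, seed=0):
    """Apply n_rounds of (Hadamard o random diagonal sign) to x."""
    rng = np.random.default_rng(seed)
    y = np.asarray(x, dtype=float)
    for _ in range(n_rounds):
        signs = rng.choice([-1.0, 1.0], size=len(y))  # random diagonal D
        y = hadamard_transform(signs * y)             # apply H . D
    return y
```

Because both the diagonal sign matrix and the normalized Hadamard matrix are orthogonal, the composed map preserves Euclidean norms while scrambling coordinates, which is the property such randomized schemes rely on.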