2021
DOI: 10.1007/978-3-030-69377-0_6
Experimental Analysis of Locality Sensitive Hashing Techniques for High-Dimensional Approximate Nearest Neighbor Searches

Cited by 12 publications (9 citation statements)
References 15 publications
“…This operation results in fewer disk I/Os since it avoids reading unnecessary buckets from disk. Nevertheless, as we show in our experimental paper [49], this incremental strategy leads to high computation costs. In [74], the authors extend I-LSH and introduce EI-LSH, which features an aggressive early-termination condition that stops the algorithm once good-enough candidates are found, saving processing time.…”
Section: Minkowski-based LSH Techniques
confidence: 81%
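The early-termination idea described for EI-LSH can be sketched generically: maintain the k best candidates seen so far while an incremental index surfaces points at growing projection radii, and stop once the current k-th best distance is already within an approximation factor of the radius explored. The function name, the `c` parameter, and the exact stopping rule below are illustrative assumptions, not the precise EI-LSH condition from [74].

```python
import heapq
import numpy as np

def knn_early_stop(query, stream, k, c=1.5):
    """Hedged sketch of early termination in incremental LSH search.

    `stream` yields (radius, point) pairs in increasing radius order,
    mimicking how an incremental index surfaces candidates. Stops once
    k candidates are 'good enough' relative to the explored radius.
    Illustrative only; not the exact EI-LSH termination test.
    """
    best = []  # max-heap of the k smallest distances, via negation
    for radius, point in stream:
        d = float(np.linalg.norm(point - query))
        if len(best) < k:
            heapq.heappush(best, -d)
        elif d < -best[0]:
            heapq.heapreplace(best, -d)
        # early termination: the k-th best distance is already within
        # factor c of the radius explored so far, so farther buckets
        # cannot improve the answer beyond the approximation guarantee
        if len(best) == k and -best[0] <= c * radius:
            break
    return sorted(-nd for nd in best)

# toy usage: points streamed in increasing projection radius
rng = np.random.default_rng(0)
q = np.zeros(4)
pts = [rng.standard_normal(4) * (i + 1) for i in range(20)]
stream = sorted(
    ((float(np.linalg.norm(p)), p) for p in pts), key=lambda rp: rp[0]
)
top3 = knn_early_stop(q, stream, k=3)
print(top3)  # the three smallest distances, found without a full scan
```

Because the stream is sorted by distance here, the loop breaks as soon as the third candidate arrives; on a real index the saving comes from skipping the remaining, more distant buckets.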
“…In future work, we will explore learning and task execution with sparse activation to determine whether there are critical constraints on the proportion active and on overlap that enable learning of the task. It is interesting in this context that several algorithms interpret the input circuits of the MB as locality-sensitive hashing [71], by which high-dimensional input can be mapped to a state represented by a sparse number of active KCs.…”
Section: Multi-node Activation
confidence: 99%
“…In LSH, the original high-dimensional space is mapped to a projected low-dimensional space using random hash projections. LSH has two key benefits: sub-linear query performance (in terms of data quantity) and theoretical guarantees on query accuracy [29]. An LSH family H is a probability distribution P over a family H of hash functions h, where the similarity function S is defined on the set of objects X and Y [30,31].…”
Section: Locality-Sensitive Hashing Technique
confidence: 99%
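The random-projection construction described above can be sketched with a standard random-hyperplane LSH for cosine similarity: each hash bit is the sign of a projection onto a random Gaussian direction, so nearby vectors agree on most bits and tend to collide in the same bucket. This is a minimal illustration of one well-known LSH family, not the specific construction of [29-31]; the function names are assumptions.

```python
import numpy as np

def make_lsh_hash(dim, n_bits, seed=0):
    """Random-hyperplane LSH: map a dim-dimensional vector to n_bits
    sign bits of Gaussian random projections (a sketch of one common
    LSH family for cosine similarity)."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, dim))  # random projection directions
    def h(x):
        # Each bit records which side of one hyperplane x falls on;
        # close vectors agree on most bits, so they share buckets often.
        return tuple(int(b) for b in (planes @ x > 0))
    return h

h = make_lsh_hash(dim=64, n_bits=8, seed=0)
rng = np.random.default_rng(1)
x = rng.standard_normal(64)
near = x + 0.01 * rng.standard_normal(64)  # small perturbation of x
same_bits = sum(a == b for a, b in zip(h(x), h(near)))
print(same_bits)  # nearby points agree on most of the 8 bits
```

The sub-linear query behavior comes from only comparing the query against points whose bit signatures collide with it, rather than scanning the whole dataset.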