Comparison Based Nearest Neighbor Search
Preprint, 2017
DOI: 10.48550/arxiv.1704.01460

Abstract: We consider machine learning in a comparison-based setting where we are given a set of points in a metric space, but we have no access to the actual distances between the points. Instead, we can only ask an oracle whether the distance between two points i and j is smaller than the distance between the points i and k. We are concerned with data structures and algorithms to find nearest neighbors based on such comparisons. We focus on a simple yet effective algorithm that recursively splits the space by first se…
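The truncated abstract describes a recursive, comparison-based splitting scheme. A minimal sketch, assuming the split picks two random pivots and routes each point to its closer pivot through the comparison oracle; the pivot rule, the leaf size, and the `oracle(x, a, b)` signature (meaning d(x, a) < d(x, b)) are illustrative assumptions, not details taken from the paper:

```python
import random

def comparison_tree(points, oracle, leaf_size=4):
    """Build a comparison tree: pick two random pivots and route every
    point to the pivot it is closer to, as judged by the oracle."""
    if len(points) <= leaf_size:
        return {"leaf": points}
    p1, p2 = random.sample(points, 2)
    left = [x for x in points if oracle(x, p1, p2)]        # d(x, p1) < d(x, p2)
    right = [x for x in points if not oracle(x, p1, p2)]
    if not left or not right:                              # degenerate split: stop
        return {"leaf": points}
    return {"pivots": (p1, p2),
            "left": comparison_tree(left, oracle, leaf_size),
            "right": comparison_tree(right, oracle, leaf_size)}

def query(tree, q, oracle):
    """Descend to the leaf whose region contains q, then return the
    leaf point closest to q according to pairwise oracle comparisons."""
    while "leaf" not in tree:
        p1, p2 = tree["pivots"]
        tree = tree["left"] if oracle(q, p1, p2) else tree["right"]
    best = tree["leaf"][0]
    for x in tree["leaf"][1:]:
        if oracle(q, x, best):                             # d(q, x) < d(q, best)
            best = x
    return best
```

With points on the real line and `oracle = lambda x, a, b: abs(x - a) < abs(x - b)`, a query that coincides with a data point follows exactly the same comparisons as that point did during construction, so it always reaches a leaf containing it.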

Cited by 2 publications (3 citation statements) · References 14 publications
“…Furthermore, the results depend strongly on the expansion rate, c. Some works such as Haghiri et al. [2017], Dasgupta and Sinha [2013] trade accuracy for improved dependence on c or other measures of dimension. Instead, these algorithms guarantee that x_q is correctly returned with probability at least 1 − δ_{c,n} for any q.…”
Section: Discussion
confidence: 99%
“…In order to quantify the sample complexity of cover trees, we require a notion of the effective dimensionality of the points X. In this paper, we will make use of the expansion constant as in [Haghiri et al., 2017, Krauthgamer and Lee, 2004]. Let B(x, r) denote the ball of radius r > 0 centered at x ∈ (M, d) according to the distance measure d. The expansion constant is sensitive to the geometry in X.…”
Section: Cover Trees For Nearest Neighbor Search
confidence: 99%
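For reference, the expansion constant invoked in this excerpt (in the usual Karger–Ruhl sense) is the smallest factor c by which doubling a ball's radius can grow the number of points it contains:

```latex
c \;=\; \min\Bigl\{\, c \ge 2 \;:\; \bigl|B(x, 2r) \cap X\bigr| \;\le\; c\,\bigl|B(x, r) \cap X\bigr| \ \ \forall\, x \in M,\ r > 0 \,\Bigr\}
```

Low-dimensional, well-spread point sets have small c, which is why nearest-neighbor guarantees are typically stated in terms of it.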
“…It is most common to classify time series data using Nearest Neighbour classification based on a relative distance (such as those returned from the measures above) [15]. In fact, for more than a decade, the NN algorithm combined with the DTW measure was extremely difficult to beat [49].…”
Section: Nearest-Neighbor Approaches
confidence: 99%
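The DTW-plus-1-NN combination mentioned in this excerpt can be sketched in a few lines. This is a generic textbook implementation, not code from the cited works: the quadratic dynamic program, the absolute-difference local cost, and the toy labels are illustrative assumptions.

```python
def dtw(a, b):
    """Dynamic time warping distance between two numeric sequences,
    via the standard O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def nn_classify(query, train):
    """1-NN classification: return the label of the training series
    with the smallest DTW distance to the query."""
    return min(train, key=lambda item: dtw(query, item[0]))[1]
```

For example, a query like `[0, 0, 1, 2, 3, 3]` warps onto `[0, 1, 2, 3]` at zero cost, so a 1-NN classifier over `[([0, 1, 2, 3], "up"), ([3, 2, 1, 0], "down")]` labels it "up".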