Mixtures of Large Margin Nearest Neighbor Classifiers (2013)
DOI: 10.1007/978-3-642-40991-2_43

Abstract: The accuracy of the k-nearest neighbor algorithm depends on the distance function used to measure similarity between instances. Methods have been proposed in the literature to learn a good distance function from a labelled training set. One such method is the large margin nearest neighbor classifier, which learns a global Mahalanobis distance. We propose a mixture of such classifiers in which a gating function divides the input space into regions and a separate distance function is learned in each region …
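To make the abstract's idea concrete, here is a minimal sketch (not the paper's implementation): each region carries its own Mahalanobis metric, and a gating function blends them into a single distance. The softmax-over-linear-scores gate is an assumption, since the truncated abstract does not specify the gating model.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) for a PSD matrix M."""
    d = x - y
    return d @ M @ d

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mixture_distance(x, y, metrics, gate_W, gate_b):
    """Mixture distance: soft gating weights blend one Mahalanobis metric
    per region. A softmax over linear region scores is assumed here; the
    paper's exact gating function is not given in the truncated abstract."""
    g = softmax(gate_W @ x + gate_b)  # soft region memberships of x
    return sum(g[k] * mahalanobis_sq(x, y, M) for k, M in enumerate(metrics))

# Toy usage: two regions in 2-D; placeholder metrics stand in for metrics
# that would be learned with an LMNN-style large-margin objective.
rng = np.random.default_rng(0)
metrics = [np.eye(2), np.diag([4.0, 0.25])]
gate_W, gate_b = rng.normal(size=(2, 2)), np.zeros(2)
x, y = rng.normal(size=2), rng.normal(size=2)
print(mixture_distance(x, y, metrics, gate_W, gate_b))
```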

Cited by 4 publications (4 citation statements; published in 2014 and 2019). References 14 publications.
“…Since our method learns several Cayley-Klein metrics locally and combines them together for a global and powerful distance metric, it is mostly related to the local metric learning [11, 13, 15, 32] and some mixed/compositional metric learning methods [16, 33]. MM-LMNN [11] is an extension of LMNN which learns a small number of metrics (typically one per class) in an effort to alleviate overfitting.…”
Section: Related Work
Confidence: 99%
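As a rough illustration of the MM-LMNN scheme mentioned in this statement, the sketch below runs kNN with one metric per class. Measuring the distance to a training point under the metric of that point's own class is the usual MM-LMNN convention but is stated here as an assumption, and the identity metrics are placeholders since the margin-based learning step is omitted.

```python
import numpy as np

def mm_lmnn_predict(x, X_train, y_train, class_metrics, k=3):
    """kNN with one metric per class, in the MM-LMNN spirit: the distance
    to each training point is measured under the metric of that point's
    own class (assumed convention; the learning of class_metrics via the
    large-margin objective is omitted)."""
    diffs = X_train - x
    dists = np.array([d @ class_metrics[yi] @ d
                      for d, yi in zip(diffs, y_train)])
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage with two classes and placeholder identity metrics.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(20, 2))
y_train = np.array([0] * 10 + [1] * 10)
class_metrics = {0: np.eye(2), 1: np.eye(2)}
print(mm_lmnn_predict(rng.normal(size=2), X_train, y_train, class_metrics))
```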
“…Additionally, a nuclear norm regularizer is adopted to obtain low-rank weight matrices for calculating metrics, which is able to control the rank of the involved linear mappings through a sparsity-inducing matrix norm. Recently, Semerci and Alpaydin [16] proposed the Mixture of LMNN (MoLMNN) method to learn a mixture of local Mahalanobis distances to better discriminate the data. It needs a gating function to softly partition the input space into several regions.…”
Section: Related Work
Confidence: 99%
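The nuclear-norm regularization mentioned in this statement can be illustrated with singular value thresholding, the standard proximal operator for the nuclear norm in proximal-gradient solvers. This is a sketch of the generic technique, not the cited paper's exact optimizer.

```python
import numpy as np

def nuclear_norm(M):
    """Nuclear norm ||M||_* = sum of singular values, the convex
    surrogate for rank mentioned in the statement above."""
    return np.linalg.svd(M, compute_uv=False).sum()

def singular_value_threshold(M, tau):
    """Proximal operator of tau * ||.||_*: shrink singular values toward
    zero, which is how a nuclear-norm regularizer drives learned weight
    matrices to low rank (generic building block, not the cited solver)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy usage: thresholding makes a noisy matrix approximately low-rank.
rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
M_low = singular_value_threshold(M, tau=1.0)
print(nuclear_norm(M), nuclear_norm(M_low), np.linalg.matrix_rank(M_low))
```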
“…In [26], Wang et al. optimize a combination of metric bases that are learned for some anchor points defined as the means of clusters constructed, for example, by the K-Means algorithm. Other local metric learning algorithms have been recently proposed, only in a classification setting, such as [33], which makes use of random forests and absolute position of points to compute a local metric; in [10], a local metric is learned based on a conical combination of Mahalanobis metrics and pair-wise similarities between the data; a last example of this non-exhaustive list comes from [21], where the authors learn a mixture of local Mahalanobis distances.…”
Section: Metric Learning
Confidence: 99%
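A minimal sketch of the anchor-point approach attributed to Wang et al. [26] in the statement above: anchor points are K-Means cluster means, and the distance combines per-anchor metric bases. The proximity-based softmax weighting and the inverse-covariance placeholder bases below are assumptions standing in for the learned components.

```python
import numpy as np
from sklearn.cluster import KMeans

def anchor_metrics(X, n_anchors=3, seed=0):
    """Anchor points as K-Means cluster means, as in the anchor-based local
    metric learning described above. The learned metric bases are replaced
    here by per-cluster inverse covariances, purely as placeholders."""
    km = KMeans(n_clusters=n_anchors, n_init=10, random_state=seed).fit(X)
    bases = []
    for k in range(n_anchors):
        C = np.cov(X[km.labels_ == k].T) + 1e-3 * np.eye(X.shape[1])
        bases.append(np.linalg.inv(C))
    return km.cluster_centers_, bases

def local_distance(x, y, anchors, bases, beta=1.0):
    """Distance under a convex combination of metric bases, weighted by the
    softmax proximity of x to each anchor -- one plausible weighting, not
    necessarily the one used in [26]."""
    w = np.exp(-beta * np.sum((anchors - x) ** 2, axis=1))
    w /= w.sum()
    d = x - y
    return sum(wk * (d @ Mk @ d) for wk, Mk in zip(w, bases))

# Toy usage on random 2-D data.
rng = np.random.default_rng(3)
X = rng.normal(size=(30, 2))
anchors, bases = anchor_metrics(X)
print(local_distance(X[0], X[1], anchors, bases))
```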