2016
DOI: 10.1016/j.neucom.2015.11.016
Margin distribution explanation on metric learning for nearest neighbor classification

Abstract: The importance of metrics in machine learning and pattern recognition algorithms has led to increasing interest in optimizing distance metrics in recent years. Most state-of-the-art methods focus on learning Mahalanobis distances, and the learned metrics are in turn heavily used for nearest neighbor (NN) classification. However, until now no theoretical link has been established between the learned metrics and their performance in NN. Although some existing methods such as large-margin near…
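For context on the setting the abstract describes, here is a minimal sketch of nearest neighbor classification under a Mahalanobis metric d_M(x, y) = sqrt((x − y)ᵀ M (x − y)) with M = LᵀL. The transform L and the toy data are illustrative assumptions; learning L is what metric learning algorithms (such as the ones the paper analyzes) actually do, and none of this is the paper's own method:

```python
# Minimal sketch (not the paper's method): nearest neighbor classification
# under a Mahalanobis metric d_M(x, y) = sqrt((x - y)^T M (x - y)),
# where M = L^T L is positive semidefinite. A metric learning algorithm
# would learn L from data; here L is a fixed illustrative matrix.
import numpy as np

def mahalanobis_nn_predict(X_train, y_train, X_test, L):
    """Classify each test point by its single nearest training point
    under the metric induced by M = L^T L (equivalently, Euclidean
    distance after the linear map x -> L x)."""
    Xt = X_train @ L.T          # map training points into metric space
    Xq = X_test @ L.T           # map query points into metric space
    # Pairwise squared Euclidean distances in the transformed space.
    d2 = ((Xq[:, None, :] - Xt[None, :, :]) ** 2).sum(axis=-1)
    return y_train[d2.argmin(axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(20, 2))
    y_train = (X_train[:, 0] > 0).astype(int)   # toy labels
    X_test = rng.normal(size=(5, 2))
    L = np.array([[2.0, 0.0],
                  [0.0, 0.5]])  # illustrative "learned" transform
    print(mahalanobis_nn_predict(X_train, y_train, X_test, L))
```

Because M = LᵀL, the Mahalanobis distance equals the Euclidean distance after the linear map x ↦ Lx, which is why the sketch simply transforms the data and reuses ordinary nearest neighbor search.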

Cited by 3 publications (1 citation statement)
References 20 publications
“…More advanced versions of the k‐NN algorithm that are capable of improving the performance could be found by varying the voting weights, neighborhood sizes, similarity metrics, etc. Datta, Misra, and Das (); Zou, Wang, Chen, and Chen (); Pan, Wang, and Ku ().…”
Section: The k-NN Algorithm
Citation type: mentioning
Confidence: 99%
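The statement above points to three tunable knobs of the k-NN algorithm: the voting weights, the neighborhood size k, and the similarity metric. A minimal sketch of a distance-weighted k-NN vote that exposes all three; everything here is illustrative and not code from the cited papers:

```python
# Illustrative sketch of the k-NN variations the statement mentions:
# the neighborhood size k, the distance metric, and the voting weights
# are all interchangeable. Inverse-distance weighting is one common
# weighting choice; nothing here is specific to the cited papers.
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k=5, metric=None):
    """Predict a label for x by an inverse-distance-weighted vote
    among its k nearest training points."""
    if metric is None:
        metric = lambda a, b: np.linalg.norm(a - b)  # Euclidean default
    d = np.array([metric(x, xi) for xi in X_train])
    nn = np.argsort(d)[:k]              # indices of the k nearest neighbors
    w = 1.0 / (d[nn] + 1e-12)           # inverse-distance voting weights
    votes = {}
    for label, weight in zip(y_train[nn], w):
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)    # label with the largest weighted vote
```

Setting all weights to 1 recovers the plain majority vote, and swapping `metric` for a learned Mahalanobis distance connects this back to the metric learning setting of the paper itself.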