2018
DOI: 10.1016/j.neucom.2018.05.089
Semi-supervised metric learning in stratified spaces via integrating local constraints and information-theoretic non-local constraints


Cited by 9 publications (8 citation statements)
References 36 publications
“…Studies in the machine learning literature on the kNN classifier confirm that the accuracy of this classifier is highly dependent on the underlying metric used to determine the nearest neighbors [6,11]. Learning a suitably parametrized distance function from the data improves the generalization ability of the kNN classifier.…”
Section: Proposed Methods (mentioning)
confidence: 99%
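The excerpt's point that kNN accuracy hinges on the underlying metric can be illustrated with a minimal sketch. The data and the diagonal metric M below are hand-picked hypotheticals, not anything produced by the cited method; the sketch only shows that the same query receives different nearest neighbors under the Euclidean metric and under a Mahalanobis metric d_M(x, y) = sqrt((x - y)^T M (x - y)).

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y))
    for a symmetric positive semi-definite matrix M."""
    d = x - y
    return np.sqrt(d @ M @ d)

# Toy 2-D data: the second feature is noisy and uninformative.
X = np.array([[0.0, 0.0], [1.0, 5.0], [2.0, -4.0]])
query = np.array([0.9, -3.0])

# Euclidean distance corresponds to M = I.
I = np.eye(2)
# A hypothetical "learned" metric that down-weights the noisy feature.
M = np.diag([1.0, 0.01])

for name, metric in [("Euclidean", I), ("Mahalanobis", M)]:
    dists = [mahalanobis_dist(query, x, metric) for x in X]
    print(name, "nearest neighbor index:", int(np.argmin(dists)))
# The two metrics disagree: index 2 under Euclidean, index 1 under M.
```

Here the stand-in metric simply down-weights a noisy feature; actual metric learning methods estimate M (or a factor L with M = L^T L) from labels or constraints.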
“…However, the underlying distance metric significantly affects the performance of this classifier. While kNN conventionally relies on the Euclidean distance metric, numerous studies demonstrate that using a fixed distance function to measure the distance between two objects is unsuitable, especially in high-dimensional spaces [11,12]. The p-norm distance metric is a generalization of the Euclidean distance, and several experiments confirm that the value of p affects the performance of kNN.…”
Section: Introduction (mentioning)
confidence: 99%
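As a sketch of the excerpt's claim about the p-norm (Minkowski) metric d_p(x, y) = (Σ_i |x_i - y_i|^p)^(1/p), the toy example below (made-up points, not data from the cited experiments) shows the nearest neighbor of a query flipping between p = 1 and p = 2:

```python
import numpy as np

def minkowski(x, y, p):
    """p-norm (Minkowski) distance; p = 2 recovers Euclidean,
    p = 1 recovers Manhattan distance."""
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

X = np.array([[3.0, 0.0], [2.0, 2.0]])
query = np.array([0.0, 0.0])

for p in (1, 2):
    dists = [minkowski(query, x, p) for x in X]
    print(f"p = {p}: nearest neighbor index = {int(np.argmin(dists))}")
# p = 1 picks index 0 (distance 3 vs 4); p = 2 picks index 1 (3 vs ~2.83).
```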
“…Sometimes, metric learning is formulated as a projection of the data into a new feature space. Supervised dimensionality reduction related to metric learning aims at finding a low-dimensional representation that maximizes the separation of labeled data; examples include neighborhood component analysis (NCA) [15], maximally collapsing metric learning (MCML) [14], LMNN [16], global and local metric learning [17], and information-theoretic metric learning [18]. Specifically, LMNN is designed to learn a Mahalanobis distance metric for k-nearest neighbor (kNN) classification.…”
Section: Related Work (mentioning)
confidence: 99%
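The "metric learning as projection" view mentioned in this excerpt rests on the identity d_M(x, y)^2 = ||Lx - Ly||^2 whenever M = L^T L. The sketch below verifies that equivalence numerically with an arbitrary random L (not a matrix learned by NCA, MCML, or LMNN):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)

# Any linear map L induces a Mahalanobis metric M = L^T L.
L = rng.normal(size=(2, 3))   # here L also reduces dimension 3 -> 2
M = L.T @ L

d_metric = (x - y) @ M @ (x - y)            # squared Mahalanobis distance
d_projected = np.sum((L @ x - L @ y) ** 2)  # squared Euclidean after projection

print(np.isclose(d_metric, d_projected))    # True: the two views coincide
```

This is why learning M with rank(L) < d doubles as supervised dimensionality reduction: distances under M equal Euclidean distances in the projected space.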
“…The second category contains weakly supervised methods that require only pairwise constraints reflecting similarity judgements between data pairs. Information-theoretic metric learning (ITML) [14][15] is a representative method that learns a distance metric through entropy theory and a Bregman optimization problem. Relevant component analysis (RCA) [16][17] is another weakly supervised method that learns a global linear transformation by exploiting only relevant (similarity) constraints.…”
Section: Introduction (mentioning)
confidence: 99%
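As a rough sketch of the RCA idea summarized in this excerpt, assuming made-up "chunklets" of points judged similar to one another: RCA estimates the within-chunklet covariance and applies its inverse square root as the global linear transformation. This is only the core whitening step, not a full reimplementation of [16][17]:

```python
import numpy as np

def rca_transform(chunklets):
    """Relevant Component Analysis core step: whiten the
    within-chunklet covariance of groups of similar points."""
    centered = [c - c.mean(axis=0) for c in chunklets]
    Z = np.vstack(centered)
    C = Z.T @ Z / len(Z)                   # within-chunklet covariance
    evals, evecs = np.linalg.eigh(C)
    # W = C^{-1/2}, the global linear transformation learned by RCA
    return evecs @ np.diag(evals ** -0.5) @ evecs.T

# Two toy chunklets of points constrained to be similar.
chunklets = [
    np.array([[0.0, 0.0], [0.2, 1.0], [-0.1, -1.2]]),
    np.array([[5.0, 3.0], [5.1, 4.0], [4.9, 2.2]]),
]
W = rca_transform(chunklets)
X_new = np.array([[1.0, 2.0]]) @ W.T       # transform new data
print(X_new)
```

Directions along which similar points vary are treated as irrelevant noise and shrunk, which is how RCA uses only the "relevant" similarity constraints the excerpt mentions.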