2020
DOI: 10.1109/tpami.2019.2914899

Learning Local Metrics and Influential Regions for Classification

Abstract: The performance of distance-based classifiers heavily depends on the underlying distance metric, so it is valuable to learn a suitable metric from the data. To address the problem of multimodality, it is desirable to learn local metrics. In this short paper, we define a new intuitive distance with local metrics and influential regions, and subsequently propose a novel local metric learning method for distance-based classification. Our key intuition is to partition the metric space into influential regions and a…
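As a rough illustration of the abstract's key intuition (not the authors' exact formulation), the sketch below computes a distance that falls back to a global Mahalanobis metric everywhere except inside hypothetical "influential regions", where a region-specific local metric is used instead. All names here (mahalanobis, local_metric_distance, the (center, radius, metric) region encoding) are illustrative assumptions.

```python
import numpy as np

def mahalanobis(x, y, M):
    """Squared Mahalanobis distance under a PSD metric matrix M."""
    d = x - y
    return float(d @ M @ d)

def local_metric_distance(x, y, global_M, regions):
    """Hypothetical sketch: if the midpoint of (x, y) lies inside an
    influential region (modeled as a center/radius ball), use that region's
    local metric; otherwise use the global metric. This only illustrates
    the paper's intuition, not its actual distance definition."""
    mid = 0.5 * (x + y)
    for center, radius, local_M in regions:
        if np.linalg.norm(mid - center) <= radius:
            return mahalanobis(x, y, local_M)
    return mahalanobis(x, y, global_M)

# Toy usage with one influential region around the origin.
dim = 3
global_M = np.eye(dim)
regions = [(np.zeros(dim), 1.0, 2.0 * np.eye(dim))]  # (center, radius, local metric)
x, y = np.array([0.1, 0.0, 0.0]), np.array([0.3, 0.1, 0.0])
print(local_metric_distance(x, y, global_M, regions))
```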

Citations: Cited by 20 publications (18 citation statements)
References: 22 publications
“…The above setup resulted in problem sizes from 29 to 400. We applied the following two data normalization schemes for the training/test data: i) a standardization scheme in [41] that first subtracts the mean and divides by the feature-wise standard deviation, and then normalizes to unit length sample-wise, and ii) a min-max scheme [40] that rescales each feature to within 0 and 1. We added 10⁻¹² noise to the dataset to avoid NaNs due to data normalization on small samples.…”
Section: Methods (mentioning)
confidence: 99%
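A minimal sketch of the two normalization schemes described in the statement above, assuming samples are rows of a NumPy array. The quoted text adds 10⁻¹² noise to the data itself; here the same tiny constant is folded into the denominators, which serves the same purpose of avoiding NaNs from zero variance or zero range on small samples.

```python
import numpy as np

EPS = 1e-12  # mirrors the 10^-12 safeguard mentioned in the quoted setup

def standardize_then_unit_length(X):
    """Scheme i): feature-wise standardization, then sample-wise
    normalization to unit length."""
    Xs = (X - X.mean(axis=0)) / (X.std(axis=0) + EPS)
    norms = np.linalg.norm(Xs, axis=1, keepdims=True) + EPS
    return Xs / norms

def min_max_scale(X):
    """Scheme ii): rescale each feature to the [0, 1] range."""
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / (mx - mn + EPS)

# Toy usage on a small sample.
X = np.random.default_rng(0).normal(size=(29, 5))
print(standardize_then_unit_length(X).shape)
print(min_max_scale(X).min(), min_max_scale(X).max())
```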
“…Definition 2: [37] Let (X, d_X) and (Y, d_Y) be two metric spaces; the Lipschitz constant of a function f is defined as:…”
Section: Theoretical Analysis on the Inseparable Problem of Metric Learning (mentioning)
confidence: 99%
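The definition quoted above is truncated; presumably it is the standard Lipschitz constant for a map f between metric spaces (X, d_X) and (Y, d_Y), which reads:

```latex
\mathrm{Lip}(f) \;=\; \sup_{x_1 \neq x_2 \in \mathcal{X}}
\frac{d_{\mathcal{Y}}\bigl(f(x_1), f(x_2)\bigr)}{d_{\mathcal{X}}(x_1, x_2)} .
```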
“…where d is the rank of L [37]. Since 1/s_m does not change the learning ability of f_θ(x), the Lipschitz constant of linear metric learning has a very small upper bound.…”
Section: Theoretical Analysis on the Inseparable Problem of Metric Learning (mentioning)
confidence: 99%
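As a hedged illustration of why a linear (Mahalanobis-style) metric yields a small Lipschitz bound: for a linear map f(x) = Lx with Euclidean distances on both sides,

```latex
\| L x_1 - L x_2 \|_2 \;\le\; \| L \|_2 \,\| x_1 - x_2 \|_2
\quad\Longrightarrow\quad
\mathrm{Lip}(f) \;\le\; \| L \|_2 ,
```

i.e. the spectral norm of L bounds the Lipschitz constant of the linear transformation.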
“…In this section, we introduce a new type of decision function for the K-NN classifier, proposed recently by [4]. This new type of K-NN classifier starts from the two-class case, i.e., judging whether a sample belongs to a specifically designated class.…”
Section: A Decision Function of K-NN Classifier (mentioning)
confidence: 99%
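For context, a generic two-class K-NN decision function judges whether a query belongs to a given class by the share of that class among its k nearest neighbors. The sketch below is a textbook rule written as a hypothetical reconstruction; it is not necessarily the specific decision function introduced in [4], and the names knn_belongs_to_class and threshold are assumptions.

```python
import numpy as np

def knn_belongs_to_class(x_query, X_train, y_train, target_class, k=5, threshold=0.5):
    """Return True if at least `threshold` of the k nearest neighbors of
    x_query (under Euclidean distance) carry `target_class`."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]
    frac = np.mean(y_train[nearest] == target_class)
    return frac >= threshold

# Toy usage on a linearly separable sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] > 0).astype(int)
print(knn_belongs_to_class(np.array([1.0, 0.0]), X, y, target_class=1, k=7))
```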
“…For example, the cross-entropy loss widely used in deep learning classifiers comes from traditional softmax regression [1], [2], [3]. On the other hand, traditional classifiers remain valuable for tasks with small data sets [4], [5].…”
Section: Introduction (mentioning)
confidence: 99%
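For reference, the link mentioned above between the cross-entropy loss and softmax regression is the standard one: with logits z and a one-hot label vector y,

```latex
p_k = \frac{e^{z_k}}{\sum_j e^{z_j}}, \qquad
\mathcal{L}_{\mathrm{CE}} = -\sum_k y_k \log p_k .
```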