2008
DOI: 10.1016/j.patcog.2007.11.023

Metric learning by discriminant neighborhood embedding

Cited by 12 publications (5 citation statements). References 11 publications.
“…Moreover, bio-inspired extensions of max-margin approaches for local learning have been proposed [55,61,162]. It is worth mentioning that the idea of augmenting unsupervised methods with label information has been explored in discriminative clustering [113,125], discriminative subspace learning [14,17,148,200], discriminative sparse coding [159,160,243], discriminative ICA [4,30,58], and discriminative manifold learning [40,54,76,150,196,235,249].…”
Section: Multi-view Learning Models (mentioning)
confidence: 99%
“…We still need to define the weights w_ij, which encode the class label information. Binary values ±1 are commonly used to assign w_ij based on whether x_i and x_j come from the same class or not [23], which essentially has the same effect as partitioning the adjacency graph into a within-class graph and a between-class graph [22,29,30] to impose a pairwise constraint that the similarity should be high between samples from the same class and low otherwise. For kNN classifiers, as suggested in [11], we care more about the relative similarity defined over a triplet of samples; i.e., a higher similarity score should be assigned when a sample x_i is compared with any of its target neighbors x_j, j ∈ N_i^+ ∨ i ∈ N_j^+, than with any of its imposter neighbors x_l, l ∈ N_i^- ∨ i ∈ N_l^-.…”
Section: Similarity Learning Using Neighborhood Embedding (mentioning)
confidence: 99%
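As a concrete illustration of the pairwise and triplet constructions described in the statement above, here is a minimal Python sketch. The names pairwise_weights and triplets are hypothetical, and selecting target/imposter neighbors by k-nearest Euclidean distance is an assumed generic construction, not necessarily the cited paper's exact procedure:

```python
import numpy as np

def pairwise_weights(y):
    """Binary +/-1 weights: w_ij = +1 if x_i and x_j share a class, else -1."""
    y = np.asarray(y)
    return np.where(y[:, None] == y[None, :], 1, -1)

def triplets(X, y, k=3):
    """Triplets (i, j, l): j is one of the k nearest same-class (target)
    neighbors of i, l is one of the k nearest different-class (imposter)
    neighbors; a learned similarity should rank s(x_i, x_j) > s(x_i, x_l).
    Assumed generic construction for illustration only."""
    X, y = np.asarray(X), np.asarray(y)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)  # never pick a point as its own neighbor
    out = []
    for i in range(len(X)):
        same = np.where(y == y[i])[0]
        same = same[same != i]
        diff = np.where(y != y[i])[0]
        targets = same[np.argsort(D[i, same])][:k]
        imposters = diff[np.argsort(D[i, diff])][:k]
        out.extend((i, j, l) for j in targets for l in imposters)
    return out
```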
“…In this paper, we propose a new similarity learning algorithm that features good scalability with respect to both sample size and dimensionality. First, motivated by the findings from manifold learning with neighborhood embedding [22,23], we restrict similarity comparison to sample pairs within the same local neighborhood, and try to capture the discriminative structure of the local data manifold using large margin neighborhood embedding (Sec. 2).…”
Section: Introduction (mentioning)
confidence: 99%
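To make the large-margin idea in this statement concrete, the local triplets from the sketch above can be scored with a hinge-style surrogate under a linear map L. This is a generic large-margin illustration under assumed notation, not the objective actually proposed in the citing paper:

```python
import numpy as np

def margin_loss(L, X, trips, margin=1.0):
    """Hinge loss over local triplets (i, j, l): after the linear map L,
    each target neighbor j should be closer to i than any imposter l by
    at least `margin`. A generic large-margin surrogate for illustration,
    not the cited work's exact formulation."""
    Z = X @ L.T  # embed all samples with the current map
    loss = 0.0
    for i, j, l in trips:
        d_tgt = np.sum((Z[i] - Z[j]) ** 2)  # distance to target neighbor
        d_imp = np.sum((Z[i] - Z[l]) ** 2)  # distance to imposter neighbor
        loss += max(0.0, margin + d_tgt - d_imp)
    return loss / max(len(trips), 1)
```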
“…Distance metric learning plays a crucial role in metric-related pattern recognition tasks including K-means, K-Nearest Neighbors, and kernel-based algorithms such as SVMs [19,4,5,23,25]. The learning task falls into two categories: unsupervised and supervised distance metric learning.…”
Section: Introduction (mentioning)
confidence: 99%
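For readers unfamiliar with the term, supervised distance metric learning most commonly means learning a Mahalanobis-type metric from labeled data. Below is a minimal sketch of that standard parametrization; it is generic background, not code specific to any of the works cited above:

```python
import numpy as np

def mahalanobis(x, y, M):
    """Generalized (Mahalanobis) distance d_M(x, y) = sqrt((x-y)^T M (x-y)).
    Supervised metric learning typically fits the PSD matrix M
    (equivalently a linear map L with M = L^T L) to labeled data."""
    d = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(d @ M @ d))

# With M = I this reduces to the ordinary Euclidean distance.
x, y = np.array([1.0, 2.0]), np.array([2.0, 0.0])
assert np.isclose(mahalanobis(x, y, np.eye(2)), np.linalg.norm(x - y))
```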