2014
DOI: 10.1109/tip.2014.2359765
LogDet Divergence-Based Metric Learning With Triplet Constraints and Its Applications

Abstract: How to select and weight features has always been a difficult problem in many image processing and pattern recognition applications. A data-dependent distance measure can address this problem to a certain extent, so an accurate and efficient metric learning method becomes necessary. In this paper, we propose a LogDet divergence-based metric learning with triplet constraints (LDMLT) approach, which can learn a Mahalanobis distance metric accurately and efficiently. First of all, we demonstrate the good propert…
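The abstract describes learning a Mahalanobis distance metric under triplet constraints. As a rough, self-contained sketch of that setting (not the paper's LDMLT algorithm), the snippet below evaluates a Mahalanobis distance for a hypothetical learned positive semidefinite matrix M and checks one triplet constraint; all names and values are illustrative.

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Squared Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y)."""
    diff = x - y
    return float(diff @ M @ diff)

# Toy example (illustrative values, not from the paper):
# M is a positive semidefinite matrix that weights and correlates features.
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])

anchor   = np.array([0.0, 0.0])
positive = np.array([0.2, 0.1])   # same class as the anchor
negative = np.array([1.0, 1.2])   # different class

# A triplet constraint (anchor, positive, negative) asks that the anchor be
# closer to the positive than to the negative under the learned metric.
satisfied = mahalanobis_dist(anchor, positive, M) < mahalanobis_dist(anchor, negative, M)
print(satisfied)  # True for this toy M
```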

Cited by 45 publications (27 citation statements)
References 19 publications (38 reference statements)
“…References [25] and [38] pointed out that triplet constraints can be derived from pairwise constraints, but not vice versa. In our previous work [37], we demonstrated theoretically that triplet constraints are weaker than pairwise constraints. Thus, the triplet constraint is the weakest as well as the most natural of these three types of constraints.…”
Section: LogDet Divergence-Based Metric Learning
confidence: 88%
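The statement above says triplet constraints can be derived from pairwise constraints but not the other way around. A minimal sketch of that one-way derivation, assuming pairwise constraints are given as lists of similar and dissimilar index pairs that may share an anchor point (a hypothetical representation, not the formulation in [25], [37], or [38]):

```python
def triplets_from_pairs(similar_pairs, dissimilar_pairs):
    """Derive triplet constraints (a, p, n) from pairwise constraints.

    For every similar pair (a, p) and dissimilar pair (a, n) sharing the
    anchor a, emit the triplet "a should be closer to p than to n".
    The reverse direction is not possible: a triplet only orders two
    distances relative to each other and never fixes an absolute
    similar/dissimilar label for either pair.
    """
    triplets = []
    for a, p in similar_pairs:
        for a2, n in dissimilar_pairs:
            if a == a2:
                triplets.append((a, p, n))
    return triplets

# Illustrative indices into a dataset (not from the paper):
similar = [(0, 1), (2, 3)]
dissimilar = [(0, 4), (2, 5)]
print(triplets_from_pairs(similar, dissimilar))  # [(0, 1, 4), (2, 3, 5)]
```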
“…If two instances belong to the same category, the pair is labeled as similar and their target Mahalanobis distance should be smaller than a desired upper limit; if not, the pair is labeled as dissimilar and their target Mahalanobis distance should be larger than a desired lower limit. Although pairwise labels are weaker than class labels [35], some of the constraints are redundant [37]. Pairwise labels therefore still have limitations in practical applications.…”
Section: LogDet Divergence-Based Metric Learning
confidence: 99%
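The statement above describes pairwise constraints as an upper limit on the distance of similar pairs and a lower limit on the distance of dissimilar pairs. A minimal sketch of that check, with hypothetical thresholds u and l (not values taken from the cited works):

```python
import numpy as np

def pairwise_constraint_satisfied(x, y, M, same_class, u=1.0, l=4.0):
    """Check a single pairwise constraint under the Mahalanobis metric M.

    Similar pairs must have squared distance at most the upper limit u;
    dissimilar pairs must have squared distance at least the lower limit l.
    u and l are hypothetical thresholds chosen for illustration.
    """
    diff = np.asarray(x) - np.asarray(y)
    d = float(diff @ M @ diff)
    return d <= u if same_class else d >= l

# Toy usage with the identity metric (illustrative values):
M = np.eye(2)
print(pairwise_constraint_satisfied([0.0, 0.0], [0.5, 0.5], M, same_class=True))   # True: 0.5 <= 1.0
print(pairwise_constraint_satisfied([0.0, 0.0], [3.0, 0.0], M, same_class=False))  # True: 9.0 >= 4.0
```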
“…Similarity measurement is a fundamental problem in the machine learning and computer vision domains [6], [24], [25], [26], [27], [28], [29], [30]. In comparison with traditional fixed metrics, e.g., the Euclidean and Mahalanobis distances, metric learning [1], [2], [3], [4] is the task of learning a distance function over objects.…”
Section: Related Work
confidence: 99%
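The statement above contrasts fixed metrics such as the Euclidean distance with learned ones. A tiny illustrative check (not code from the cited works) of the standard fact that the Euclidean distance is the special case of a Mahalanobis distance with the identity matrix, which is exactly what a learned metric generalizes:

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.0])

# Euclidean distance equals the Mahalanobis distance with M = I,
# i.e., no learned feature weighting or correlation.
M = np.eye(2)
diff = x - y
d_mahalanobis = np.sqrt(diff @ M @ diff)
d_euclidean = np.linalg.norm(diff)
assert np.isclose(d_mahalanobis, d_euclidean)
```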
“…Liu et al. [40] propose a method for similarity metric learning with a low-rank constraint for high-dimensional data, where the number of training samples n is much smaller than the feature dimension d, i.e., n << d. Mei et al. [27] propose a LogDet divergence-based metric learning model. In [38], [39], Shalit et al. design an embedded manifold of low-rank matrices for an online metric learning model, and in [37], a Riemannian method is proposed to pursue a low-rank positive semidefinite matrix.…”
Section: Related Work
confidence: 99%
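The statement above surveys low-rank approaches for the n << d regime. One common way to impose such a low-rank constraint (an illustrative parameterization, not the exact formulation of any single cited method) is to factor the metric as M = LᵀL with a small rank budget r, which keeps M positive semidefinite and makes distance evaluation cheaper:

```python
import numpy as np

# Illustrative low-rank parameterization: M = L^T L with L of shape (r, d),
# r << d, so rank(M) <= r and M is positive semidefinite by construction.
rng = np.random.default_rng(0)
d, r = 100, 5                 # high feature dimension, small rank budget
L = rng.standard_normal((r, d))
M = L.T @ L                   # low-rank positive semidefinite metric

x, y = rng.standard_normal(d), rng.standard_normal(d)
diff = x - y

# Equivalent ways to evaluate the squared Mahalanobis distance:
d_full = diff @ M @ diff           # O(d^2) using the full matrix
d_low = np.sum((L @ diff) ** 2)    # O(r * d) using the factor directly
assert np.isclose(d_full, d_low)
```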