2018
DOI: 10.1007/978-3-030-01231-1_17

Deep Metric Learning with Hierarchical Triplet Loss

Abstract: We present a novel hierarchical triplet loss (HTL) capable of automatically collecting informative training samples (triplets) via a defined hierarchical tree that encodes global context information. This allows us to cope with the main limitation of random sampling in training a conventional triplet loss, which is a central issue for deep metric learning. Our main contributions are two-fold. (i) We construct a hierarchical class-level tree where neighboring classes are merged recursively. The hierarchical str…
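The abstract only sketches the mechanism at a high level, so the example below illustrates one plausible reading of it: class centroids are merged recursively into a class-level hierarchy (here via SciPy's agglomerative linkage), and triplets are then sampled so that negatives come from classes close to the anchor class in that hierarchy. The clustering settings, names, and sampling heuristic are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of hierarchy-guided triplet sampling; not the HTL authors' code.
# Assumes one centroid per class is already available; all choices below are illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
num_classes, dim = 20, 32
class_means = rng.normal(size=(num_classes, dim))   # stand-in for class centroids

# Build the class-level hierarchy by recursively merging nearby classes.
tree = linkage(class_means, method="average", metric="euclidean")

# Cut the tree at a coarse level: classes in the same cluster are "visually close".
coarse_labels = fcluster(tree, t=4, criterion="maxclust")

def sample_triplet(features_by_class):
    """Pick (anchor, positive) from one class and a negative from a class that is
    a neighbour under the hierarchy, so the triplet tends to be informative."""
    anchor_cls = rng.integers(num_classes)
    # Prefer negatives from the same coarse cluster (harder negatives).
    siblings = [c for c in range(num_classes)
                if c != anchor_cls and coarse_labels[c] == coarse_labels[anchor_cls]]
    if not siblings:
        siblings = [c for c in range(num_classes) if c != anchor_cls]
    neg_cls = int(rng.choice(siblings))
    a, p = rng.choice(len(features_by_class[anchor_cls]), size=2, replace=False)
    n = rng.integers(len(features_by_class[neg_cls]))
    return (anchor_cls, int(a)), (anchor_cls, int(p)), (neg_cls, int(n))

# Toy per-class features just to exercise the sampler.
features = {c: rng.normal(loc=class_means[c], scale=0.1, size=(10, dim))
            for c in range(num_classes)}
print(sample_triplet(features))
```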

Cited by 315 publications (248 citation statements)
References 28 publications (72 reference statements)
“…First, we show in the upper part of Table 1 that HORDE significantly improves three popular baselines (contrastive loss, triplet loss and binomial deviance). These improvements allow us to claim state of the art results for single model methods on CUB with 58.3% R@1 (compared to 57.1% R@1 for HTL [5]) and second best for CARS.…”
Section: Comparison to the State-of-the-Art (mentioning)
confidence: 97%
“…Margin [12] uses 128-dimensional embeddings with a ResNet50 [4] backbone. HTL [3] sets the embedding dimension to 512 and reports the state-of-the-art result with an Inception backbone. With this larger embedding dimension, it is unsurprising that all of these methods outperform the existing DML methods with 64 embeddings in Table 1.…”
Section: CUB-2011 (mentioning)
confidence: 99%
“…This also allows the metric learning process to generalize well on cross-domain datasets (Table 1). (3) Inspired by [2], we propose an online anchor nearest neighbor sampling (OANNS) method that addresses the batch sampling issue for large-scale training data. Our method outperforms the state of the art on our collected fashion retrieval benchmark and significantly boosts performance on a cross-domain benchmark even without further tuning.…”
Section: Introduction (mentioning)
confidence: 99%
“…The major difference between OANNS and [2] is that we save the extra computation cost of computing image vectors for all images. During batch training, we take the image vectors from the feed-forward output and update product vectors for the current batch of products.…”
mentioning
confidence: 99%
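The quoted passage describes refreshing per-product vectors from the embeddings already produced in the current batch's forward pass, rather than re-embedding every image. The sketch below shows one simple way such an update could look; the exponential-moving-average form, the momentum value, and all names are assumptions for illustration, not the cited paper's exact procedure.

```python
# A minimal sketch of updating class ("product") prototype vectors from the
# embeddings of the current mini-batch only, avoiding a full re-embedding pass.
# The EMA update rule and momentum value are illustrative assumptions.
import numpy as np

def update_prototypes(prototypes, batch_embeddings, batch_labels, momentum=0.9):
    """Exponential-moving-average update of per-class prototype vectors using
    only the embeddings computed in the current mini-batch's forward pass."""
    for cls in np.unique(batch_labels):
        batch_mean = batch_embeddings[batch_labels == cls].mean(axis=0)
        prototypes[cls] = momentum * prototypes[cls] + (1.0 - momentum) * batch_mean
    return prototypes

# Toy usage: 5 classes, 16-d embeddings, one mini-batch of 8 samples.
rng = np.random.default_rng(1)
protos = rng.normal(size=(5, 16))
emb = rng.normal(size=(8, 16))
labels = rng.integers(0, 5, size=8)
protos = update_prototypes(protos, emb, labels)
```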