2017 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2017.309

Sampling Matters in Deep Embedding Learning

Abstract: Deep embeddings answer one simple question: How similar are two images? Learning these embeddings is the bedrock of verification, zero-shot learning, and visual search. The most prominent approaches optimize a deep convolutional network with a suitable loss function, such as contrastive loss or triplet loss. While a rich line of work focuses solely on the loss functions, we show in this paper that selecting training examples plays an equally important role. We propose distance weighted sampling, which selects …
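The abstract is truncated at this point, so the following is only a minimal PyTorch sketch of one plausible reading of distance weighted sampling: negatives are drawn with probability inversely proportional to the analytic density of pairwise distances between points on a unit hypersphere, so that candidates cover the whole distance range rather than clustering around the mean. The function name, cutoff values, and batch interface are our own assumptions, not the authors' reference implementation.

    import torch

    def distance_weighted_sample(embeddings, labels, cutoff=0.5, far_cutoff=1.4):
        # Illustrative sketch only: draw one negative per anchor with probability
        # inversely proportional to q(d), the density of pairwise distances on the
        # unit hypersphere. Assumes embeddings are L2-normalized and that every
        # anchor has at least one other-class example within far_cutoff in the batch.
        n, dim = embeddings.shape
        dist = torch.cdist(embeddings, embeddings).clamp(min=cutoff)

        # log q(d) = (dim-2)*log d + ((dim-3)/2)*log(1 - d^2/4), up to a constant.
        log_q = (dim - 2.0) * torch.log(dist) + ((dim - 3.0) / 2.0) * torch.log(
            torch.clamp(1.0 - 0.25 * dist.pow(2), min=1e-8))
        log_w = -log_q
        log_w = log_w - log_w.max(dim=1, keepdim=True).values   # avoid overflow in exp
        weights = torch.exp(log_w)

        same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
        weights = weights.masked_fill(same_class, 0.0)           # negatives only
        weights = weights * (dist < far_cutoff).float()          # drop uninformative far pairs

        return torch.multinomial(weights, 1).squeeze(1)          # one negative index per anchor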

Cited by 622 publications (424 citation statements)
References 33 publications

Citation statements (ordered by relevance):
“…proposed SGML framework is compared with state-of-the-art methods using various sampling methods [26], [61], loss functions [41], [61] and ensemble methods [1], [42], [53]. Our SGML achieves a Recall@1 of 71.9%, outperforming methods such as distance-weighted [26] and position-dependent sampling [61], and loss functions such as pair-wise loss [26], triplet/quadratic loss [61] and binomial deviance loss [41]. This clearly shows the advantage of the proposed SBDL loss and its inherent property of treating training samples based on their degree of information.…”
Section: Results: DeepFashion Dataset, 1) Comparison With State-of-the-Art (mentioning)
confidence: 99%
“…We set the dimensionality of φ_k to D = 2048 and choose the parameters λ_1, λ_2, Σ using cross-validation on the training set. The margin parameter γ is set to 0.2 as suggested in various works [33]. We train our model using stochastic gradient descent with an initial learning rate of 0.01 and momentum of 0.9.…”
Section: Implementation Details and Benchmarking (mentioning)
confidence: 99%
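The snippet above fixes the main optimization hyperparameters (embedding dimension D = 2048, margin γ = 0.2, SGD with learning rate 0.01 and momentum 0.9). As a hedged illustration of that setup, the skeleton below wires a margin-0.2 triplet loss to an SGD optimizer with those settings; the backbone network, the projection to 2048 dimensions, and the batch construction are placeholders rather than the cited model.

    import torch
    import torch.nn as nn

    # Training-step skeleton matching the hyperparameters quoted above
    # (margin 0.2, SGD with lr 0.01 and momentum 0.9). The backbone is a
    # stand-in for a real CNN, used here only to produce 2048-d embeddings.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2048))
    criterion = nn.TripletMarginLoss(margin=0.2)
    optimizer = torch.optim.SGD(backbone.parameters(), lr=0.01, momentum=0.9)

    def train_step(anchor, positive, negative):
        # One SGD step on a batch of (anchor, positive, negative) image tensors.
        optimizer.zero_grad()
        f_a = nn.functional.normalize(backbone(anchor), dim=1)    # unit-norm embeddings
        f_p = nn.functional.normalize(backbone(positive), dim=1)
        f_n = nn.functional.normalize(backbone(negative), dim=1)
        loss = criterion(f_a, f_p, f_n)
        loss.backward()
        optimizer.step()
        return loss.item()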
“…Full model vs. triplet learning: In this ablation experiment (Tab. 3 (Right), Triplets Only) we use our extracted reliable (dis-)similarity relationships to mine triplet constraints as input to a standard triplet-loss framework [33]. For a sample that serves as triplet anchor, reliable similarity relations act as positive constraints and reliable dissimilarity relations act as negative constraints.…”
Section: Ablation Studies (mentioning)
confidence: 99%
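The ablation above feeds reliably similar and reliably dissimilar pairs into a standard triplet loss as positive and negative constraints. A small sketch of that mining step, under the assumption that the relations are given as per-anchor index lists (the data layout and function name are ours, not the cited paper's interface):

    from itertools import product

    def mine_triplets(reliable_pos, reliable_neg):
        # reliable_pos / reliable_neg map each anchor index to the indices judged
        # reliably similar / dissimilar to it; every combination of one positive
        # and one negative yields a triplet constraint (anchor, positive, negative).
        triplets = []
        for anchor, positives in reliable_pos.items():
            negatives = reliable_neg.get(anchor, [])
            for p, n in product(positives, negatives):
                triplets.append((anchor, p, n))
        return triplets

    # Example: anchor 0 is reliably similar to 3 and 7, reliably dissimilar to 5.
    print(mine_triplets({0: [3, 7]}, {0: [5]}))   # [(0, 3, 5), (0, 7, 5)]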
“…Within this framework, techniques to sample triplets remain an active area of research. These include hard-negative mining [7], semi-hard negative mining [21], and distance weighted sampling [28], all of which bias the selection of triplets.…”
Section: Related Work (mentioning)
confidence: 99%
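Of the three sampling strategies listed in that snippet, semi-hard negative mining is the easiest to state concretely: for each anchor, pick a negative that is farther away than the chosen positive but still within the margin. Below is a hedged PyTorch sketch of that rule; the fallback to the closest negative when no semi-hard candidate exists, and all names and defaults, are our own choices rather than any cited implementation.

    import torch

    def semi_hard_negatives(dist, labels, pos_idx, margin=0.2):
        # For each anchor i with chosen positive pos_idx[i], sample a negative with
        # d(a, p) < d(a, n) < d(a, p) + margin. `dist` is an [N, N] pairwise distance
        # matrix. Assumes each anchor has at least one other-class example in the batch.
        n = dist.size(0)
        anchors = torch.arange(n)
        d_ap = dist[anchors, pos_idx]                          # anchor-positive distances
        is_neg = labels.unsqueeze(0) != labels.unsqueeze(1)    # different-class mask
        semi_hard = is_neg & (dist > d_ap.unsqueeze(1)) & (dist < (d_ap + margin).unsqueeze(1))

        neg_idx = torch.empty(n, dtype=torch.long)
        for i in range(n):
            candidates = semi_hard[i].nonzero(as_tuple=True)[0]
            if candidates.numel() == 0:                        # fall back to the hardest negative
                candidates = is_neg[i].nonzero(as_tuple=True)[0]
                neg_idx[i] = candidates[dist[i, candidates].argmin()]
            else:                                              # otherwise pick a random semi-hard one
                neg_idx[i] = candidates[torch.randint(candidates.numel(), (1,)).item()]
        return neg_idx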