2019
DOI: 10.1109/tpami.2019.2911075

Significance of Softmax-based Features in Comparison to Distance Metric Learning-based Features

Abstract: End-to-end distance metric learning (DML) has been applied to obtain features useful in many computer vision tasks. However, these DML studies have not provided equitable comparisons between features extracted from DML-based networks and softmax-based networks. In this paper, we present objective comparisons between these two approaches under the same network architecture.
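
As an illustration of the comparison the abstract describes, the sketch below puts a softmax (cross-entropy) head and a distance metric learning (triplet) loss on one shared backbone, with features taken from the same penultimate layer in both cases. The PyTorch code and layer sizes (784-128-64, 10 classes) are assumptions for illustration only, not the paper's actual architecture.

    import torch.nn as nn
    import torch.nn.functional as F

    # Shared backbone; only the training loss differs between the two setups.
    backbone = nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, 128), nn.ReLU(),
        nn.Linear(128, 64),  # features are taken from this layer
    )

    # Softmax-based setup: a classification head trained with cross-entropy.
    classifier = nn.Linear(64, 10)

    def softmax_loss(x, y):
        return F.cross_entropy(classifier(backbone(x)), y)

    # DML-based setup: the same backbone trained with a triplet loss instead.
    triplet = nn.TripletMarginLoss(margin=0.2)

    def dml_loss(anchor, positive, negative):
        return triplet(backbone(anchor), backbone(positive), backbone(negative))

After training, both setups yield a 64-dimensional feature from backbone(x), so the two approaches can be compared on the same retrieval or clustering task.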

Citations: cited by 53 publications (32 citation statements)
References: 30 publications (77 reference statements)

“…In addition to OSV, these two categories of loss functions were also studied for different classification tasks, and similar comparative results were reported (Horiguchi, Ikami, & Aizawa, 2016; Janocha & Czarnecki, 2017). The work of Horiguchi et al. (2016) confirmed this result by performing a fair comparison between Cross-Entropy (CE) loss as a classification loss and several state-of-the-art metric learning losses.…”
Section: Introduction (supporting)
confidence: 57%
“…In particular, several sampling strategies have been widely investigated to improve performance, such as hard mining [16], semi-hard mining [35], and smart mining [13]. In comparison, softmax embedding achieves competitive performance without any sampling requirement [18]. Supervised learning has achieved superior performance on various tasks, but it still relies on sufficient annotated data.…”
Section: Related Work (mentioning)
confidence: 99%
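
For illustration, a minimal NumPy sketch of the semi-hard mining rule mentioned above: for each anchor-positive pair, a negative is chosen with d(a,p) < d(a,n) < d(a,p) + margin. The function name and the margin value 0.2 are assumptions for this example, not code from any cited work.

    import numpy as np

    def semi_hard_negatives(dists, labels, margin=0.2):
        # dists: (n, n) pairwise distance matrix; labels: (n,) integer class ids.
        triplets = []
        for a in range(len(labels)):
            for p in np.flatnonzero(labels == labels[a]):
                if p == a:
                    continue
                d_ap = dists[a, p]
                # negatives of a different class inside the semi-hard band
                band = (labels != labels[a]) & (dists[a] > d_ap) & (dists[a] < d_ap + margin)
                candidates = np.flatnonzero(band)
                if candidates.size:
                    # keep the hardest negative within the band
                    n_idx = candidates[np.argmin(dists[a, candidates])]
                    triplets.append((a, int(p), int(n_idx)))
        return triplets
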
“…Finally, a fully connected layer flattens the previous layer's volume into a feature vector, and an output layer then computes the scores (confidences or probabilities) for the output classes through a dense network. This output is then passed to a normalizing function such as softmax [12], which maps it to a vector whose elements sum to one [7].…”
Section: Advantages of Deep Learning (mentioning)
confidence: 99%
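
The softmax mapping described in this excerpt can be made concrete with a short NumPy sketch (the max-subtraction is a standard numerical-stability trick added here, not something from the cited text):

    import numpy as np

    def softmax(z):
        # Map raw class scores to probabilities that sum to 1.
        e = np.exp(z - np.max(z))   # subtracting max(z) avoids overflow
        return e / e.sum()

    scores = np.array([2.0, 1.0, 0.1])
    probs = softmax(scores)
    print(probs)        # approx. [0.659 0.242 0.099]
    print(probs.sum())  # 1.0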