2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01108
AdaCos: Adaptively Scaling Cosine Logits for Effectively Learning Deep Face Representations

Abstract: Cosine-based softmax losses [21,28,39,8] and their variants [40,38,7] have achieved great success in deep learning-based face recognition. However, the hyperparameter settings in these losses have a significant influence on the optimization path as well as on the final recognition performance. Manually tuning these hyperparameters relies heavily on user experience and requires many training tricks. In this paper, we investigate in depth the effects of two important hyperparameters of cosine-based softmax losses, the scal…
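The abstract is cut off above, but the fixed-scale cosine softmax it refers to is standard. As a reference point, here is a minimal PyTorch sketch of that loss; the function name and the value s=30 are illustrative, and the paper's point is precisely that such a fixed s must otherwise be tuned by hand:

```python
import torch
import torch.nn.functional as F

def cosine_softmax_loss(features, weights, labels, s=30.0):
    """Cosine-based softmax loss with a fixed scale hyperparameter s.

    features: (N, D) embeddings; weights: (C, D) class weight vectors.
    Both are L2-normalized, so each logit is a cosine in [-1, 1];
    the scale s stretches the logits before the usual cross-entropy.
    """
    f = F.normalize(features, dim=1)
    w = F.normalize(weights, dim=1)
    cos_theta = f @ w.t()              # (N, C) cosine similarities
    return F.cross_entropy(s * cos_theta, labels)
```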

Cited by 192 publications (119 citation statements). References 38 publications (158 reference statements).
“…It also designs an appropriate loss function that can enhance the discriminative power of DCNN-based, large-scale face recognition. However, cosine-based softmax losses [167-169] provide better results in deep learning-based face recognition. Highly discriminative features were achieved using an Additive Angular Margin Loss (ArcFace) for face recognition [170].…”
Section: Object Detection in Surveillance
Citation type: mentioning
Confidence: 99%
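The additive angular margin loss this quote refers to is ArcFace. A minimal sketch under common assumptions follows; s=64 and m=0.5 are the defaults usually cited in the literature, not values taken from this page:

```python
import torch
import torch.nn.functional as F

def arcface_logits(features, weights, labels, s=64.0, m=0.5):
    """Additive-angular-margin (ArcFace-style) logits.

    The margin m is added to the *angle* of the target class only,
    turning its logit into cos(theta_y + m); all logits are then
    multiplied by the scale s before cross-entropy.
    """
    f = F.normalize(features, dim=1)
    w = F.normalize(weights, dim=1)
    cos_theta = (f @ w.t()).clamp(-1.0 + 1e-7, 1.0 - 1e-7)
    theta = torch.acos(cos_theta)
    target = F.one_hot(labels, num_classes=weights.size(0)).bool()
    logits = torch.where(target, torch.cos(theta + m), cos_theta)
    return s * logits

# usage: loss = F.cross_entropy(arcface_logits(feat, W, y), y)
```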
“…First, we plan to apply GO loss to other datasets for a thorough evaluation of its performance under different application scenarios. Second, we will propose a method to quantitatively determine the values of the hyperparameters, such as by visual analytics [6] or adaptive scaling [47].…”
Section: Results
Citation type: mentioning
Confidence: 99%
“…The choice of the scaling hyper-parameter usually relies on heuristic trials, which are both time-consuming and inconvenient. The automatic selection of this parameter has been discussed in [29]. Inspired by these efforts, we designed a simple scheme to automatically determine this scale parameter for different logits, so that their scale ranges are the same.…”
Section: Automatic Scale Parameter Selection
Citation type: mentioning
Confidence: 99%
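The cited paper gives no formula for its range-matching scheme in this excerpt, so the following is only one plausible, hypothetical reading of "making the scale ranges the same", not that paper's actual method:

```python
import torch

def align_scale(reference_logits, logits, eps=1e-8):
    """Hypothetical illustration: rescale one set of logits so its
    value range matches a reference set. The function name and the
    range-ratio rule are our own guesses, not the cited scheme.
    """
    ref_range = reference_logits.max() - reference_logits.min()
    cur_range = (logits.max() - logits.min()).clamp_min(eps)
    return logits * (ref_range / cur_range)
```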
“…Similarly, NormFace [24] normalized both the feature vectors and the weight vectors to optimize cosine similarity instead of the inner products in the softmax loss, thereby effectively improving the angular discrimination of the features. In addition, the adaptive selection of the scale and margin hyper-parameters was studied in [28,29,47]. Zhang et al. studied the settings of the scale and angular margin parameters in cosine-based softmax losses and proposed AdaCos [29], which adaptively scales the cosine logits to enhance supervision during training.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
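Based on our reading of the AdaCos paper itself, the adaptive scale at each iteration is s = log(B_avg) / cos(min(pi/4, theta_med)), where B_avg averages the non-target exponentiated logits over the batch and theta_med is the median target angle. A sketch follows; the helper name and implementation details are ours and have not been verified against the authors' release:

```python
import math
import torch

def adacos_scale(cos_theta, labels, s_prev):
    """Dynamic scale update in the spirit of AdaCos [29].

    b_avg: batch-averaged sum of exp(s_prev * cos) over the
    non-target classes; theta_med: median target-class angle,
    clipped at pi/4. New scale = log(b_avg) / cos(theta_med).
    """
    n = cos_theta.size(0)
    target = torch.zeros_like(cos_theta, dtype=torch.bool)
    target[torch.arange(n), labels] = True
    # average "background" term over the batch (non-target classes only)
    b_avg = torch.where(target,
                        torch.zeros_like(cos_theta),
                        torch.exp(s_prev * cos_theta)).sum(dim=1).mean()
    theta_y = torch.acos(cos_theta[target].clamp(-1.0, 1.0))
    theta_med = theta_y.median().clamp(max=math.pi / 4)
    return torch.log(b_avg) / torch.cos(theta_med)
```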