2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00534
Ring Loss: Convex Feature Normalization for Face Recognition

Abstract: We motivate and present Ring loss, a simple and elegant feature normalization approach for deep networks designed to augment standard loss functions such as Softmax. We argue that deep feature normalization is an important aspect of supervised classification problems where we require the model to represent each class in a multi-class problem equally well. The direct approach to feature normalization through the hard normalization operation results in a non-convex formulation. Instead, Ring loss applies soft normalization, where it gradually learns to constrain the norm to the scaled unit circle while preserving convexity, leading to more robust features.
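A minimal PyTorch sketch of the soft normalization the abstract describes: an auxiliary penalty on the deviation of each feature's L2 norm from a learned target radius R, added to the primary Softmax loss. The weight `loss_weight` and the initialization of R below are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class RingLoss(nn.Module):
    """Ring loss: soft feature normalization as an auxiliary penalty.

    Penalizes (||f||_2 - R)^2 per sample, where the scalar target norm R
    is learned jointly with the network, instead of hard-projecting each
    feature onto a fixed-radius sphere (which is non-convex to optimize).
    """

    def __init__(self, loss_weight: float = 0.01):
        super().__init__()
        # Learned target norm R; initialized to 1.0 (an assumption).
        self.radius = nn.Parameter(torch.ones(1))
        self.loss_weight = loss_weight

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, dim) embeddings taken before the classifier.
        norms = features.norm(p=2, dim=1)
        return self.loss_weight * 0.5 * ((norms - self.radius) ** 2).mean()
```

In training, this term is simply added to the Softmax (cross-entropy) loss, so feature norms are pulled toward R gradually by gradient descent rather than constrained in one hard projection step.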

Cited by 193 publications (122 citation statements)
References 25 publications
“…on the Source Only model. This technique is widely used to characterize the feature embeddings under the softmax-related objectives [47,21,51]. Specifically, we set the task-specific features to be two-dimensional and retrain the model.…”
Section: Introduction
confidence: 99%
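A hedged illustration of the visualization technique this excerpt describes: constrain the task-specific (penultimate) features to two dimensions and retrain, so the learned embeddings can be plotted directly. The architecture, dimensions, and input shape below are hypothetical, not from the cited work.

```python
import torch
import torch.nn as nn

# Hypothetical backbone whose penultimate ("task-specific") layer is 2-D,
# so the per-class feature distribution can be scatter-plotted directly.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 2),   # two-dimensional feature layer for visualization
    nn.Linear(2, 10),    # classifier head trained on the 2-D features
)

x = torch.randn(4, 1, 28, 28)    # dummy image batch
features = model[:-1](x)         # (4, 2) embeddings, ready to plot
```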
“…Metric learning is common in few-shot learning. Multiple improvements of the standard softmax and cross-entropy loss are proposed by [49,22,53,45,9] to this end. Traditional methods like siamese networks are also considered [3,40,18] along with models that learn by comparing multiple samples at once [44,50,42].…”
Section: Related Work
confidence: 99%
“…The L2-constraint based deep length normalization explained in Section 3.2 uses the norm constraint right before the softmax loss. However, according to [22], such a direct approach through the hard normalization operation results in a non-convex formulation. It results in local minima generated by the loss function itself and leads to difficulties in optimization.…”
Section: Ring Loss-based Deep Length Normalization
confidence: 99%
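For contrast, a minimal sketch of the hard L2-constraint normalization the excerpt refers to: each feature is projected onto a sphere of fixed radius before the softmax loss. The scale `alpha` is an assumed hyperparameter; the excerpt's point is that optimizing through this hard operation is non-convex, which Ring loss relaxes into a soft penalty.

```python
import torch

def hard_l2_normalize(features: torch.Tensor, alpha: float = 10.0) -> torch.Tensor:
    """Project each (batch, dim) feature onto a sphere of radius alpha.

    This is the 'hard normalization operation' the excerpt contrasts with
    Ring loss; alpha is an illustrative constant, not from the cited papers.
    """
    eps = 1e-12  # guard against division by a zero norm
    return alpha * features / features.norm(p=2, dim=1, keepdim=True).clamp_min(eps)
```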
“…The final speaker representation is produced by aggregating the embeddings from each sub-region. Furthermore, we apply convex length normalization using ring loss [22] to normalize the speaker embedding. We show that ring loss-based deep length normalization performs better than the L2-constraint based one.…”
Section: Introduction
confidence: 99%
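Tying the excerpts together, a self-contained toy training step that augments cross-entropy with the ring penalty on (dummy) speaker embeddings; all shapes, values, and the weight 0.01 are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
embeddings = torch.randn(8, 128, requires_grad=True)  # dummy speaker embeddings
logits = torch.randn(8, 10, requires_grad=True)       # dummy classifier outputs
labels = torch.randint(0, 10, (8,))
radius = torch.ones(1, requires_grad=True)            # learned target norm R

ring_penalty = 0.01 * 0.5 * ((embeddings.norm(p=2, dim=1) - radius) ** 2).mean()
loss = F.cross_entropy(logits, labels) + ring_penalty
loss.backward()  # gradients flow into the embeddings, logits, and R
```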