2017 IEEE International Conference on Computer Vision (ICCV) 2017
DOI: 10.1109/iccv.2017.578
Range Loss for Deep Face Recognition with Long-Tailed Training Data

Abstract: Convolutional neural networks have achieved great improvement on face recognition in recent years because of their extraordinary ability to learn discriminative features for people with different identities. Training such a well-designed deep network requires tremendous amounts of data. Long-tail distribution refers specifically to the fact that a small number of generic entities appear frequently, while many other objects appear far less often. Considering the existence of long-tail distribution in the real wor…

Cited by 381 publications (261 citation statements). References 37 publications.
“…[45] makes samples uniformly distributed by random sampling. [31] proposes a range loss to balance the rich and poor classes, where the largest intra-class distance is reduced and the shortest class-center distance is enlarged.…”
Section: Learning With Insufficient Data
confidence: 99%
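The range-loss idea quoted above (reduce the largest intra-class distance, enlarge the shortest class-center distance) can be sketched as follows. This is a simplified illustration, not the paper's exact formulation: the function name, the hinge margin, and the weighting factor are assumptions.

```python
import numpy as np

def range_loss_sketch(features, labels, margin=2.0, lam=0.5):
    """Illustrative sketch of the range-loss idea: penalize the largest
    intra-class distance and the (hinged) shortest distance between
    class centers. margin/lam are assumed hyperparameters."""
    intra = 0.0
    centers = {}
    for c in np.unique(labels):
        fc = features[labels == c]
        centers[c] = fc.mean(axis=0)
        if len(fc) > 1:
            # largest pairwise intra-class Euclidean distance
            d = np.linalg.norm(fc[:, None, :] - fc[None, :, :], axis=-1)
            intra += d.max()
    # shortest distance between class centers (ignore self-distances)
    cs = np.stack(list(centers.values()))
    dc = np.linalg.norm(cs[:, None, :] - cs[None, :, :], axis=-1)
    dc[np.eye(len(cs), dtype=bool)] = np.inf
    inter = max(0.0, margin - dc.min())  # hinge: push centers apart
    return intra + lam * inter
```

With two tight, well-separated clusters the inter-class hinge is zero and only the intra-class spread contributes.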
“…In the early years, most face recognition approaches utilized metric loss functions, such as triplet loss [30] and contrastive loss [2], which use a Euclidean margin to measure the distance between features. Taking advantage of these works, center loss [31] and range loss [33] were proposed to reduce intra-class variations by minimizing distances within target classes [1].…”
Section: Related Work
confidence: 99%
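The Euclidean-margin idea behind the triplet loss mentioned above can be illustrated with a minimal sketch; the function name and default margin are assumptions for illustration:

```python
import numpy as np

def triplet_loss_sketch(anchor, positive, negative, margin=0.2):
    """Hinge on the Euclidean margin between the anchor-positive
    and anchor-negative distances (single triplet, illustrative)."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)
```

The loss is zero once the negative is farther from the anchor than the positive by at least the margin.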
“…One category is loss functions that optimize the Euclidean distance. Examples include contrastive loss [23], range loss [31], triplet loss [22], center loss [27] and marginal loss [8]. They cluster samples from the same identity by minimizing the intra-class Euclidean distances.…”
Section: Deep Face Recognition
confidence: 99%
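The clustering objective described here (minimizing intra-class Euclidean distances) can be illustrated with a simplified center-loss-style sketch; unlike the actual center loss, where centers are learned parameters updated during training, this version just uses batch means:

```python
import numpy as np

def center_loss_sketch(features, labels):
    """Mean squared distance of each feature to its class center,
    with centers taken as batch means (simplified, non-learned)."""
    loss = 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        center = fc.mean(axis=0)
        loss += ((fc - center) ** 2).sum()
    return loss / len(features)
```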
“…It is robust to unbalanced distributions over identities (e.g. long-tailed distributions [31]). Even when many of the identities have only one or two images in the unlabeled dataset, our proposed UIR loss is still able to utilize them without attenuating the performance.…”
Section: Introduction
confidence: 99%