2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01173

Variational Prototype Learning for Deep Face Recognition

Abstract: Deep face recognition has achieved remarkable improvements due to the introduction of margin-based softmax loss, in which the prototype stored in the last linear layer represents the center of each class. In these methods, training samples are enforced to be close to positive prototypes and far apart from negative prototypes by a clear margin. However, we argue that prototype learning only employs sample-to-prototype comparisons without considering sample-to-sample comparisons during training and the low loss …
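For context, a minimal sketch of such a margin-based softmax loss (in the style of ArcFace, where the weights of the last linear layer act as class prototypes) is given below. The scale s and margin m values are common conventions rather than this paper's settings, and the class name is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MarginSoftmax(nn.Module):
    def __init__(self, feat_dim, num_classes, s=64.0, m=0.5):
        super().__init__()
        # One prototype (weight vector) per identity class.
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, feats, labels):
        # Cosine similarity between L2-normalized features and prototypes:
        # a pure sample-to-prototype comparison.
        cos = F.linear(F.normalize(feats), F.normalize(self.prototypes))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # The additive angular margin pushes each sample close to its positive
        # prototype and away from negative prototypes by a clear margin.
        target = F.one_hot(labels, num_classes=cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos)
        return F.cross_entropy(self.s * logits, labels)

Note that every term in this loss compares a sample to a prototype; no sample-to-sample term appears, which is exactly the limitation the abstract argues against.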

Cited by 51 publications (24 citation statements)
References 51 publications (119 reference statements)
“…Moreover, CurricularFace (Huang et al. 2020) and MV-Arc-Softmax (Wang et al. 2020) introduce mining-based strategies to emphasize misclassified samples. The recent work VPL (Deng et al. 2021) first analyzes the limitation of previous methods, which employ sample-to-prototype comparisons during training without considering sample-to-sample comparisons, and then introduces sample-to-sample comparisons into the classification framework for FR. In contrast to existing works, our proposed AnchorFace discusses the necessity of optimization under the Anchor FAR (i.e., Anchor Optimization) for practical FR from a new perspective, and introduces a pair of loss functions to reduce the gap between training and evaluation for FR.…”
Section: Related Work (mentioning, confidence: 99%)
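As a rough illustration of what injecting sample-to-sample comparisons into the classification framework can look like (a hedged sketch of the idea, not the exact VPL formulation), each prototype can be blended with a memorized sample embedding before computing logits. The per-class memory, the mixing weight lam, and the update rule below are illustrative assumptions.

import torch
import torch.nn.functional as F

@torch.no_grad()
def update_memory(memory, feats, labels):
    # Overwrite each class slot with the latest normalized sample embedding.
    memory[labels] = F.normalize(feats, dim=1)

def variational_logits(feats, prototypes, memory, lam=0.2):
    # Blending each prototype with a memorized sample feature makes the
    # resulting cosine logit mix a sample-to-prototype term with a
    # sample-to-sample term.
    blended = (1 - lam) * F.normalize(prototypes, dim=1) + lam * memory
    return F.linear(F.normalize(feats, dim=1), F.normalize(blended, dim=1))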
“…Meanwhile, as discussed in VPL (Deng et al. 2021), features drift slowly for FR models, which indicates that features extracted previously can be considered an approximation of the output of the current network within a certain number of training steps. Therefore, we also create a validness indicator V ∈ R^{N×K} to represent the validness of each feature in the online-updating set S. Each item in V is a scalar value that denotes the remaining valid steps for the corresponding feature in the online-updating set S. The maximum number of valid steps for each feature is M, and we initialize all items of V as 0 at the beginning of the training process.…”
Section: AnchorFace (mentioning, confidence: 99%)
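The validness bookkeeping quoted above translates almost directly into code. The sketch below assumes an online-updating set S with N classes, K feature slots per class, and feature dimension D; each item of V holds the remaining valid steps, capped at M and initialized to 0, as described. Class and method names are assumptions.

import torch

class OnlineFeatureSet:
    def __init__(self, num_classes, slots, feat_dim, max_steps):
        self.S = torch.zeros(num_classes, slots, feat_dim)  # memorized features
        self.V = torch.zeros(num_classes, slots)            # remaining valid steps, init 0
        self.M = max_steps                                  # maximum validness M per feature

    def step(self):
        # One training step consumes one unit of validness: because features
        # drift slowly, a stored feature approximates the current network's
        # output for at most M steps.
        self.V = (self.V - 1).clamp(min=0)

    def insert(self, cls, slot, feat):
        self.S[cls, slot] = feat
        self.V[cls, slot] = self.M  # a freshly stored feature is valid for M steps

    def valid_mask(self):
        return self.V > 0  # compare only against features that are still valid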
“…However, as the usage of tuples introduces an exponential increase in sample complexity [94,115], similar emphasis has also been placed on tuple selection heuristics to boost training speed and generalization, either based on sample distances [94,100,110,115], hierarchical arrangements [33], or adapted to the training process [38,89]. Tuple complexity can also be addressed using proxies as stand-in replacements in the generation of tuples [20,50,71,82,102,125]. However, while literature results suggest increasing generalization performance based on simple changes in re-ranking and tuple selection, recent work has instead highlighted a much stronger saturation in method performance [30,72,91], underlining the importance of fair and comparable training and evaluation protocols with fixed backbone network and pipeline parameter choices.…”
Section: Related Work (mentioning, confidence: 99%)
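A minimal sketch of the proxy idea mentioned above, in the style of ProxyNCA: learnable proxies act as stand-in replacements for real samples when forming tuples, so each sample is compared against one proxy per class instead of against other batch samples, avoiding the exponential tuple complexity. The temperature and initialization are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyNCA(nn.Module):
    def __init__(self, feat_dim, num_classes, temp=9.0):
        super().__init__()
        # One learnable proxy per class replaces real samples in tuples.
        self.proxies = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.temp = temp

    def forward(self, feats, labels):
        # Squared Euclidean distances to all proxies act as (negative) logits;
        # attracting each sample to its class proxy replaces the mining of
        # sample-to-sample pairs or triplets.
        d = torch.cdist(F.normalize(feats), F.normalize(self.proxies)) ** 2
        return F.cross_entropy(-self.temp * d, labels)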
“…Recent FR systems [8,9,22] report face verification results exceeding 99.5% on the arguably simple Labeled Faces in the Wild (LFW) dataset [17], yet reach only 92% under more challenging cross-age and cross-pose scenarios. First analyses of FR performance under morphing attacks have been published before [23,27,35,37,39,40,44].…”
Section: Introduction (mentioning, confidence: 99%)