2022
DOI: 10.3390/s22051921

Template-Driven Knowledge Distillation for Compact and Accurate Periocular Biometrics Deep-Learning Models

Abstract: This work addresses the challenge of building an accurate and generalizable periocular recognition model with a small number of learnable parameters. Deeper (larger) models are typically more capable of learning complex information. For this reason, knowledge distillation (KD) was previously proposed to carry this knowledge from a large model (teacher) into a small model (student). Conventional KD optimizes the student output to be similar to the teacher output (commonly the classification output). In biometrics, …

Cited by 10 publications (3 citation statements); references 47 publications (66 reference statements).
“…In another study, Boutros et al [47] proposed a novel template-driven KD approach that optimized the distillation process so that the student model learned to produce templates similar to those produced by the teacher model. This was obtained by introducing an additional loss function over the original KD loss operating on the feature extraction layer.…”
Section: Prior Work
confidence: 99%
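The template-driven idea described above — conventional KD on the outputs plus an additional loss tying the student's feature-layer templates to the teacher's — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the choice of mean-squared error for the template term, the temperature `T`, and the weight `lam` are assumptions for the sketch.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a 1-D logit vector."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Conventional KD: KL divergence between softened output distributions."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

def template_loss(student_emb, teacher_emb):
    """Hypothetical template term: MSE between feature-layer embeddings
    (the 'templates' produced by teacher and student)."""
    s = np.asarray(student_emb, dtype=float)
    t = np.asarray(teacher_emb, dtype=float)
    return float(np.mean((s - t) ** 2))

def template_driven_kd_loss(s_logits, t_logits, s_emb, t_emb, T=4.0, lam=1.0):
    """Total loss: conventional KD plus the template term on the feature layer."""
    return kd_loss(s_logits, t_logits, T) + lam * template_loss(s_emb, t_emb)
```

When the student exactly matches the teacher in both logits and templates, the total loss is zero; any mismatch in either the classification output or the embedding contributes a positive penalty.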
“…The biometric literature refers to the recognition of this area, when the iris is not exclusively targeted, as periocular recognition [106]. Periocular recognition can include the periocular region of one eye for some applications [107,108]. However, in the masked face recognition scenario, both right and left periocular regions are typically considered.…”
Section: Enhancing Masked Face Recognition
confidence: 99%
“…KD transfers the acquired knowledge learned by a larger network to a smaller one [37], [38]. KD has shown great success in improving the verification performance of compact FR models [10], [11], [39]. Furthermore, the combination of KD with other model compression techniques such as compact model design [11], or NAS [10], [17] demonstrated very promising accuracies in FR.…”
Section: Introduction
confidence: 99%