Twelfth International Conference on Machine Vision (ICMV 2019) 2020
DOI: 10.1117/12.2557244
Margin based knowledge distillation for mobile face recognition

Cited by 6 publications (2 citation statements)
References 0 publications
“…Knowledge distillation [80], [81] is a useful method of transferring knowledge from large teacher models to smaller student models. There are several knowledge distillation methods suitable for face recognition models [82]-[84], but they are not adapted to the Prototype Memory approach. For example, to initialize the training, [83] needs class centers for each class, calculated by the teacher network.…”
Section: F. Prototype Memory Knowledge Distillation (mentioning)
Confidence: 99%
“…Knowledge distillation is a useful method of transferring knowledge from large teacher models to smaller student models. There are several knowledge distillation methods suitable for face recognition models [78]-[80], but they are not adapted to the Prototype Memory approach. For example, to initialize the training, [79] needs class centers for each class, calculated by the teacher network.…”
Section: F. Prototype Memory Knowledge Distillation (mentioning)
Confidence: 99%
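The citation statements above refer to standard knowledge distillation, i.e. training a small student network to mimic a large teacher. As a rough illustration only, the following is a minimal sketch of a soft-label distillation loss in PyTorch, in the style of Hinton-type distillation; the function name, temperature, and weighting are illustrative assumptions and do not reproduce the margin-based method of the cited paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Sketch of a soft-label knowledge distillation loss.

    The student is trained to match the teacher's softened class
    distribution while still fitting the hard identity labels.
    """
    # Softened teacher and student distributions at temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    # KL term is scaled by T^2 to keep gradient magnitudes comparable.
    kd_term = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    # Ordinary cross-entropy on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Usage with random tensors (shapes chosen only for illustration).
student_logits = torch.randn(8, 100)   # small "mobile" student outputs
teacher_logits = torch.randn(8, 100)   # large teacher outputs
labels = torch.randint(0, 100, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
```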