“…Towards the goal of efficient periocular recognition, Almadan et al. [51] utilized conventional KD to train MobileNet-V2 [52] (3.5M parameters), MobileNet-V3 [53] (2.5M parameters), ResNet-20 [44] (1.3M parameters), and ShuffleNetV2-50 [54] (1.4M parameters) with ResNet-50 [44] as the teacher model. These models were trained and evaluated on the VISOB [55] and UFPR [56] datasets. Among the evaluated models, MobileNet-V2 (3.5M parameters) achieved the lowest EER: 5.21% on VISOB and 5.38% on UFPR.…”
This work addresses the challenge of building an accurate and generalizable periocular recognition model with a small number of learnable parameters. Deeper (larger) models are typically more capable of learning complex information. For this reason, knowledge distillation (KD) was previously proposed to transfer this knowledge from a large model (teacher) into a small model (student). Conventional KD optimizes the student output to be similar to the teacher output (commonly the classification output). In biometrics, however, comparison (verification) and storage operations are conducted on biometric templates extracted from pre-classification layers. In this work, we propose a novel template-driven KD approach that optimizes the distillation process so that the student model learns to produce templates similar to those produced by the teacher model. We demonstrate our approach on intra- and cross-device periocular verification. Our results demonstrate the superiority of the proposed approach over a network trained without KD and networks trained with conventional (vanilla) KD. For example, the targeted small model achieved an equal error rate (EER) of 22.2% on cross-device verification without KD, 21.9% with conventional KD, and only 14.7% with our proposed template-driven KD.
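For illustration, the difference between the two objectives can be sketched in a few lines of PyTorch. This is a minimal sketch, not the paper's implementation: the temperature T, the weighting alpha, and the cosine-distance template loss are assumed choices, and the paper's exact formulation may differ.

```python
import torch.nn.functional as F

def vanilla_kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Conventional (vanilla) KD: match the teacher's softened class
    # distribution, combined with the usual hard-label cross-entropy.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def template_kd_loss(student_template, teacher_template):
    # Template-driven KD (assumed form): align the pre-classification
    # embeddings (templates) directly, here via mean cosine distance.
    s = F.normalize(student_template, dim=1)
    t = F.normalize(teacher_template, dim=1)
    return (1.0 - (s * t).sum(dim=1)).mean()
```

In training, the teacher's logits and templates would typically be computed under torch.no_grad(), so that only the student is updated.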
“…Thorough evaluation of fine-tuned CNNs suggests the efficacy of ResNet-50, LightCNN, and MobileNet for mobile ocular recognition [21]. Datasets such as MICHE-I [7] (92 subjects) and VISOB 1.0 [16] (550 subjects) have been assembled for ocular recognition on mobile devices. The VISOB 1.0 dataset was used in the 2016 IEEE ICIP international competition on mobile ocular biometrics.…”
Section: Introduction
“…Recent interest has been in subject-independent evaluation of these ocular recognition methods, where subjects do not overlap between the training and testing sets, to simulate realistic scenarios. To this end, the VISOB 2.0 competition [16] was organized at the IEEE WCCI 2020 conference using the VISOB 2.0 database. VISOB 2.0 [16] is a new version of the VISOB 1.0 dataset in which the region of interest is extended from the eye (iris, conjunctival, and episcleral vasculature) to the periocular region (a region encompassing the eye).…”
Section: Introduction
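The subject-disjoint split described above can be expressed compactly; the sketch below is purely illustrative, assumes samples are (subject_id, image_path) pairs, and does not reflect the actual VISOB 2.0 protocol code.

```python
import random

def subject_independent_split(samples, train_ratio=0.7, seed=0):
    # Illustrative subject-disjoint split: no subject appears in both
    # the training and the testing set.
    subjects = sorted({subject_id for subject_id, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_train = int(len(subjects) * train_ratio)
    train_subjects = set(subjects[:n_train])
    train = [s for s in samples if s[0] in train_subjects]
    test = [s for s in samples if s[0] not in train_subjects]
    return train, test
```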
“…To this end, the VISOB 2.0 competition [16] was organized at the IEEE WCCI 2020 conference using the VISOB 2.0 database. VISOB 2.0 [16] is a new version of the VISOB 1.0 dataset in which the region of interest is extended from the eye (iris, conjunctival, and episcleral vasculature) to the periocular region (a region encompassing the eye). Further, the evaluation protocol is subject-independent, in contrast to the subject-dependent evaluation of the IEEE ICIP VISOB 1.0 competition [20].…”
Section: Introduction
“…All the experiments are conducted on the VISOB 2.0 dataset [16], which facilitates subject-independent evaluation across three lighting conditions: office, daylight, and dark light. This paper is organized as follows: Section 2 details the deep learning architectures used in this study for ocular analysis.…”
Recent research has questioned the fairness of face-based recognition and attribute classification methods (such as gender and race classification) for dark-skinned people and women. Ocular biometrics in the visible spectrum is an alternative to face biometrics, thanks to its accuracy, security, robustness against facial expression, and ease of use on mobile devices. With the recent COVID-19 crisis, ocular biometrics has a further advantage over face biometrics in the presence of a mask. However, the fairness of ocular biometrics has not been studied until now. This first study aims to explore the fairness of ocular-based authentication and gender classification methods across males and females. To this aim, the VISOB 2.0 dataset, along with its gender annotations, is used for the fairness analysis of ocular biometrics methods based on ResNet-50, MobileNet-V2, and LightCNN-29 models. Experimental results suggest equivalent performance for males and females in ocular-based mobile user authentication in terms of genuine match rate (GMR) at lower false match rates (FMRs) and overall area under the curve (AUC). For instance, an average AUC of 0.96 for females and 0.95 for males was obtained for LightCNN-29. However, males significantly outperformed females in deep-learning-based gender classification models based on the ocular region.
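The reported verification metrics can be computed from genuine and impostor score distributions. A minimal sketch using NumPy and scikit-learn follows; it is illustrative only and assumes higher scores indicate better matches, with the function names chosen here for clarity.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def gmr_at_fmr(genuine, impostor, target_fmr=1e-3):
    # Genuine match rate (GMR, i.e. TPR) at a fixed false match rate (FMR).
    scores = np.concatenate([np.asarray(genuine, float),
                             np.asarray(impostor, float)])
    labels = np.concatenate([np.ones(len(genuine)), np.zeros(len(impostor))])
    fmr, gmr, _ = roc_curve(labels, scores)  # FPR plays the role of FMR here
    return float(np.interp(target_fmr, fmr, gmr))

def verification_auc(genuine, impostor):
    # Overall area under the ROC curve.
    scores = np.concatenate([np.asarray(genuine, float),
                             np.asarray(impostor, float)])
    labels = np.concatenate([np.ones(len(genuine)), np.zeros(len(impostor))])
    return roc_auc_score(labels, scores)
```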