Periocular refers to the region around the eye, including the sclera, eyelids, lashes, brows, and surrounding skin. It offers surprisingly high discrimination ability and is the ocular modality with the least constrained acquisition requirements. Here, we apply existing pre-trained architectures, proposed in the context of the ImageNet Large Scale Visual Recognition Challenge, to the task of periocular recognition. These networks have proven very successful in many computer vision tasks beyond the detection and classification problems for which they were designed. Experiments are conducted on a database of periocular images captured with a digital camera. We demonstrate that these off-the-shelf CNN features can effectively recognize individuals from periocular images, despite having been trained to classify generic objects. Compared against reference periocular features, they reduce the EER by up to ∼40%, and the fusion of CNN and traditional features yields further improvements.
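The verification scheme described above reduces to comparing off-the-shelf CNN feature vectors and measuring the Equal Error Rate (EER) of the resulting score distributions. A minimal sketch of that evaluation step, using synthetic scores in place of real CNN-feature comparisons (the distributions and sample sizes are illustrative, not from the paper):

```python
import numpy as np

def cosine_score(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def compute_eer(genuine, impostor):
    """Equal Error Rate via a threshold sweep over all observed scores."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    eer, best_gap = 0.5, np.inf
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors accepted
        frr = np.mean(genuine < t)     # genuine pairs rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

rng = np.random.default_rng(0)
# toy similarity scores standing in for CNN-feature comparisons
genuine  = rng.normal(0.8, 0.1, 500)   # same-person pairs
impostor = rng.normal(0.3, 0.1, 500)   # different-person pairs
print(f"EER: {compute_eer(genuine, impostor):.3f}")
```

The EER is the operating point where the false acceptance and false rejection rates coincide, which is the figure the abstract's ∼40% reduction refers to.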
Periocular recognition has gained attention in recent years thanks to its high discrimination capability in less constrained scenarios than other ocular modalities. In this paper we propose a method for periocular verification across different light spectra using CNN features, with the particularity that the network has not been trained for this purpose. We use a ResNet-101 model pre-trained for the ImageNet Large Scale Visual Recognition Challenge to extract features from the IIITD Multispectral Periocular Database. At each layer, the features are compared using the χ² distance and cosine similarity to carry out verification between images, improving the EER and the accuracy at 1% FAR by up to 63.13% and 24.79%, respectively, compared to previous works employing the same database. In addition, we train a neural network to match the best CNN feature layer vectors across spectra. With this procedure, we achieve improvements of up to 65% (EER) and 87% (accuracy at 1% FAR) in cross-spectral verification with respect to previous studies.
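The two comparison measures named above are standard and easy to sketch. Below is a minimal implementation of the χ² distance and cosine similarity, applied to toy vectors standing in for one layer's activations (the values are illustrative; real ResNet-101 features would be much higher-dimensional):

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance; suited to non-negative features (e.g. post-ReLU)."""
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy activations standing in for one CNN layer's feature vector
f1 = np.array([0.2, 0.0, 1.3, 0.7])
f2 = np.array([0.1, 0.0, 1.1, 0.9])
print(chi2_distance(f1, f2))      # small distance: similar features
print(cosine_similarity(f1, f2))  # close to 1: similar direction
```

χ² is a distance (lower means more similar), while cosine is a similarity (higher means more similar), so thresholds for the two point in opposite directions when deciding genuine versus impostor.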
We address the use of selfie ocular images captured with smartphones to estimate age and gender. Partial face occlusion has become an issue due to the mandatory use of face masks. In addition, the use of mobile devices has exploded, with the pandemic further accelerating the migration to digital services. However, state-of-the-art solutions in related tasks such as identity or expression recognition employ large Convolutional Neural Networks, whose use in mobile devices is infeasible due to hardware limitations and size restrictions on downloadable applications. To counteract this, we adapt two existing lightweight CNNs proposed in the context of the ImageNet Challenge, and two additional architectures proposed for mobile face recognition. Since datasets for soft-biometrics prediction from selfie images are limited, we counteract over-fitting by using networks pre-trained on ImageNet. Furthermore, some networks are further pre-trained for face recognition, for which very large training databases are available. Since both tasks employ similar input data, we hypothesise that such a strategy can be beneficial for soft-biometrics estimation. A comprehensive study of the effects of different pre-training regimes on the employed architectures is carried out, showing that, in most cases, better accuracy is obtained after the networks have been fine-tuned for face recognition.

INTRODUCTION

Recent research has explored the automatic extraction of information such as the gender, age, or ethnicity of an individual, known as soft-biometrics [1]. These attributes can be deduced from biometric data such as face photos, voice, gait, or hand and body images. One of the most natural approaches is face analysis [2], but given the use of masks during the COVID-19 pandemic, the face appears occluded even in cooperative settings, leaving the ocular region as the only visible part.
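The adaptation strategy described above amounts to keeping a pre-trained backbone and training a new classification head for the soft-biometric attribute. A minimal sketch of that idea, using synthetic embeddings in place of real frozen-backbone features and a simple logistic-regression head (all names, dimensions, and data are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy "frozen backbone embeddings": two synthetic classes standing in
# for a binary soft-biometric label such as gender
X = np.vstack([rng.normal(-1, 1, (200, 16)), rng.normal(1, 1, (200, 16))])
y = np.concatenate([np.zeros(200), np.ones(200)])

# logistic-regression head trained with plain gradient descent;
# the backbone weights would stay frozen in this setup
w, b = np.zeros(16), 0.0
for _ in range(300):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

Because only the small head is trained, the scheme needs far less labelled data than training the full network, which is the over-fitting argument the abstract makes for pre-training.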
In recent years, the ocular region has gained attention as a stand-alone modality for a variety of tasks, including person recognition [3], soft-biometrics estimation [4], and liveness detection [5]. Accordingly, this work addresses the challenge of estimating soft-biometrics when only the ocular region is available. Additionally, we are interested in mobile environments [6]. The pandemic has accelerated the migration to the digital domain, turning mobile devices into data hubs used for all types of

This is an open access article under the terms of the Creative Commons Attribution-NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.
This work addresses the challenge of comparing periocular images captured in different spectra, which is known to produce significant performance drops compared to operating within a single spectrum. We propose the use of Conditional Generative Adversarial Networks, trained to convert periocular images between the visible and near-infrared spectra, so that biometric verification is carried out in the same spectrum. The proposed setup allows the use of existing feature methods typically optimized to operate in a single spectrum. Recognition experiments are conducted with a number of off-the-shelf periocular comparators based both on hand-crafted features and CNN descriptors. Using the Hong Kong Polytechnic University Cross-Spectral Iris Images Database (PolyU) as the benchmark dataset, our experiments show that cross-spectral performance improves substantially when both images are converted to the same spectrum, compared to matching features extracted from images in different spectra. In addition, we fine-tune a CNN based on the ResNet50 architecture, obtaining a cross-spectral periocular performance of EER = 1% and GAR > 99% @ FAR = 1%, which is comparable to the state of the art on the PolyU database.
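The core intuition of the conversion-then-verify pipeline can be sketched with a toy model: if NIR features are a fixed transformation of VIS features, comparing them directly fails, but mapping them back into the VIS domain before comparison recovers the similarity. Here a least-squares linear map stands in for the GAN generator (the data, dimensions, and linear assumption are illustrative, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 8, 100
A = rng.normal(0, 1, (d, d))           # unknown spectrum shift (toy stand-in)

vis = rng.normal(0, 1, (n, d))                        # "visible" features
nir = vis @ A.T + 0.05 * rng.normal(0, 1, (n, d))     # their "NIR" counterparts

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# learned NIR -> VIS mapping; least squares stands in for the GAN generator
M, *_ = np.linalg.lstsq(nir, vis, rcond=None)
nir_mapped = nir @ M

direct    = np.mean([cos(vis[i], nir[i]) for i in range(n)])
converted = np.mean([cos(vis[i], nir_mapped[i]) for i in range(n)])
print(f"direct cross-spectral: {direct:.2f}, after conversion: {converted:.2f}")
```

Same-identity pairs that look dissimilar across spectra become highly similar after conversion, which is why single-spectrum comparators can then be reused unchanged.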