“…Numerous recent research works focus on the development of small and efficient neural networks suitable for systems with limited resources, for instance mobile devices. A common approach is reducing the number of parameters in the convolutions, with the MobileNet [24,25], ShuffleNet [26,27], and Xception [28] models utilizing depth-wise separable convolutions. Rattani et al. [16,18,19] were among the pioneers in recognizing age or gender from RGB ocular images captured on mobile devices.…”
Soft biometrics is a growing field that has been shown to improve recognition systems over the past decade. When combined with hard biometrics such as iris, gait, or fingerprint recognition, the efficiency of the system increases many fold. The pandemic brought the need to efficiently recognise faces covered with masks, and soft biometrics proved to be an aid in this. While recent advances in computer vision have helped in the estimation of age and gender, the system could be improved by extending the scope to detect several other soft biometric attributes that help in identifying a person, including but not limited to eyeglasses, hair type and color, mustache, and eyebrows. In this paper we propose an identification system that uses the ocular and forehead regions of the face as modalities to train models that use transfer learning to detect 12 soft biometric attributes (FFHQ dataset) and 25 soft biometric attributes (CelebA dataset) for masked faces. We compare the results with unmasked faces in order to see how efficiency varies across these datasets. Throughout the paper we implement 4 enhanced models, namely enhanced AlexNet, enhanced ResNet50, enhanced MobileNetV2, and enhanced SqueezeNet. The enhanced models apply transfer learning to the base models and aid in improving accuracy. In the end we compare the results and see how accuracy varies with the model used and whether the images are masked or unmasked. We conclude that for images containing facial masks, the enhanced MobileNetV2 gives an accuracy of 92.5% (FFHQ dataset) and 87% (CelebA dataset).
“…Some ocular attributes, such as pupil position and radius, have also been used for user profiling in [62,63]. In these cases, CNNs were utilized to predict the age and gender of different users.…”
Ensuring the confidentiality of private data stored in our technological devices is fundamental to protecting our personal and professional information. Authentication procedures are among the main methods used to achieve this protection and are typically implemented only when accessing the device. Nevertheless, on many occasions it is necessary to carry out user authentication in a continuous manner to guarantee authorized use of the device while protecting authentication data. In this work, we first review the state of the art of Continuous Authentication (CA), User Profiling (UP), and related biometric databases. Secondly, we summarize the privacy-preserving methods employed to protect the security of sensor-based data used to conduct user authentication, along with practical examples of their utilization. The analysis of the literature on these topics reveals the importance of sensor-based data for protecting personal and professional information, as well as the need to explore combining more biometric features with privacy-preserving approaches.
“…4) Soft biometrics: Rattani et al. [29] used a shallow CNN (with six hidden layers) to estimate gender and age in periocular samples acquired from handheld devices. They concluded that such frameworks still have enough discriminating power, even in the case of poor-quality samples.…”
Convolutional neural networks (CNNs) have emerged as the most popular classification models in biometrics research. Under the discriminative paradigm of pattern recognition, CNNs are typically used in one of two ways: 1) verification mode ("are samples from the same person?"), where pairs of images are provided to the network to distinguish between genuine and impostor instances; and 2) identification mode ("whom is this sample from?"), where appropriate feature representations that map images to identities are found. This paper postulates a novel mode for using CNNs in biometric identification, by learning models that answer the question "is the query's identity among this set?". The insight is reminiscent of the classical Mastermind game: by iteratively analysing the network responses when multiple random samples of k gallery elements are compared to the query, we obtain weakly correlated matching scores that, altogether, provide solid cues to infer the most likely identity. In this setting, identification is regarded as a variable selection and regularization problem, with sparse linear regression techniques being used to infer the matching probability with respect to each gallery identity. As its main strength, this strategy is highly robust to outlier matching scores, which are known to be a primary error source in biometric recognition. Our experiments were carried out on the full versions of two well-known iris near-infrared (CASIA-IrisV4-Thousand) and periocular visible-wavelength (UBIRIS.v2) datasets, and confirm that recognition performance can be substantially boosted by the proposed algorithm, when compared to the traditional working modes of CNNs in biometrics.
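The subset-scoring idea in this abstract can be illustrated on synthetic data: score the query against many random k-element gallery subsets, then let a sparse linear regressor attribute the elevated scores to a single gallery identity. This is a toy sketch using scikit-learn's Lasso as the sparse regressor; the gallery size, score model, noise level, and regularization strength are assumptions for illustration, not the paper's values:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_gallery, k, n_trials = 50, 5, 400
true_id = 17  # hypothetical identity of the query

# Simulate "is the query's identity among this set?" responses: each trial
# draws a random k-element gallery subset and yields a noisy matching score
# that is higher when the true identity is inside the subset.
A = np.zeros((n_trials, n_gallery))
y = np.zeros(n_trials)
for i in range(n_trials):
    subset = rng.choice(n_gallery, size=k, replace=False)
    A[i, subset] = 1.0
    hit = true_id in subset
    y[i] = (0.9 if hit else 0.1) + rng.normal(0, 0.1)

# Variable selection via sparse regression: the weight vector should
# concentrate on the one gallery column that explains the elevated scores.
lasso = Lasso(alpha=0.01, positive=True)
lasso.fit(A, y)
predicted = int(np.argmax(lasso.coef_))
print(predicted)  # should recover true_id
```

The sparsity penalty is what provides the robustness claimed in the abstract: an occasional outlier score in a few trials barely moves a coefficient that is supported by many overlapping subsets.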