Existing methods for gender classification from facial images mostly rely on either shape or texture cues. This paper presents a novel face representation that combines shape and texture information for gender classification. We propose extracting Scale Invariant Feature Transform (SIFT) descriptors at specific facial landmark positions, thereby encoding both the face shape and local texture. Moreover, we propose a decision-level fusion framework that combines this landmark-SIFT representation with the Local Binary Patterns (LBP) descriptor extracted from the whole face image; LBP is known to be tolerant of uncontrolled image-capture conditions. The proposed decision-level fusion achieves competitive correct classification rates on both controlled (97% on FERET) and uncontrolled (95% on LFW and 94% on KinFace) benchmark datasets.
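The abstract's two texture ingredients can be illustrated concretely. The sketch below is not the paper's implementation; it is a minimal numpy version of a basic 3x3 LBP descriptor and a simple score-averaging decision-level fusion, with the weight `w` and the two posterior inputs chosen purely for illustration.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: threshold each pixel's 8 neighbours against the
    centre pixel and pack the results into an 8-bit code.
    Border pixels are skipped for simplicity."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(gray):
    """Normalised 256-bin histogram of LBP codes: the holistic face descriptor."""
    hist, _ = np.histogram(lbp_image(gray), bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)

def fuse_decisions(p_sift, p_lbp, w=0.5):
    """Decision-level fusion: weighted average of the two classifiers'
    posterior probabilities for one class (e.g. 'male')."""
    return w * p_sift + (1 - w) * p_lbp
```

In a full pipeline, the landmark-SIFT branch and the LBP branch would each feed their own classifier, and only the resulting class scores would be fused, which is what distinguishes decision-level from feature-level fusion.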
Biometric presentation attack detection (PAD) is gaining increasing attention, as users of mobile devices find it more convenient to unlock their smart applications with finger, face, or iris recognition instead of passwords. In this study, the authors survey recent approaches to detecting face and iris presentation attacks. Specifically, they investigate the effectiveness of fine-tuning very deep convolutional neural networks for face and iris anti-spoofing, comparing two fine-tuning approaches on six publicly available benchmark datasets. The results show that these deep models learn discriminative features that tell real and fake biometric images apart with a very low error rate. Cross-dataset evaluation on face PAD showed better generalisation than the state of the art, and the authors also report cross-dataset testing on iris PAD datasets in terms of equal error rate, which had not been reported in the literature before. Additionally, they propose a single deep network trained to detect both face and iris attacks, and observed no accuracy degradation compared with networks trained on one biometric alone. Finally, they analysed the features learned by the network, in correlation with the image frequency components, to justify its prediction decisions.
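The closing sentence refers to correlating learned features with image frequency components. As a rough illustration of that kind of analysis (not the authors' method), the sketch below uses a 2-D FFT to measure how an image's energy splits between a low- and a high-frequency band, under the common assumption that recapture artefacts shift energy between bands; the `cutoff` value is arbitrary.

```python
import numpy as np

def band_energy(gray, cutoff=0.25):
    """Split an image's 2-D FFT power spectrum into a low- and a
    high-frequency band and return the fraction of energy in each.
    `cutoff` is the radius, as a fraction of Nyquist, separating the bands."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(spectrum) ** 2
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    # normalised radial distance from the spectrum centre (DC component)
    r = np.sqrt(((yy - h / 2) / (h / 2)) ** 2 + ((xx - w / 2) / (w / 2)) ** 2)
    low = power[r <= cutoff].sum() / power.sum()
    return low, 1.0 - low
```

A smooth image concentrates its energy near DC, while noise spreads it across the spectrum, so comparing these fractions between bona fide and attack samples is one simple way to ground a frequency-based interpretation.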
This paper presents a novel method for combining the outputs of different gender classification techniques based on facial images. The methods are merged by a committee machine using Bayes' theorem. We implement and compare several well-known individual classifiers on four different datasets, then evaluate the proposed machine and show that it significantly improves classification accuracy over the individual classifiers. We also include results that address the effect of image scale on classifier performance.

I. INTRODUCTION

Facial analysis has been widely investigated in computer vision, including gender, age, and expression classification. In particular, gender discrimination is important for several applications: it can improve the performance of face verification [1] and face recognition systems by using separate models for each gender [2], [3]; it can help index and retrieve images [4]; and it is useful for building interaction systems that behave differently according to the gender of the user.

The accuracy of individual gender classification methods can be boosted by merging more than one classifier [5]. When these classifiers use different input features extracted from the face, there is a higher probability that their false classifications on a set of images are disjoint, in which case merging helps minimize the final error of the combined classifier.

In this paper we propose a committee machine for merging classification methods based on the naive Bayes theorem, and show, on four different image databases, that this combination improves performance over the best single constituent classifier by more than 4%.

The paper is organized as follows. Section II presents an overview of related previous work. Section III describes the individual classification methods used, and Section IV introduces our proposed method for merging these classifiers. Section V explains the experiments we carried out, the databases we used, and the results achieved. The final section concludes our work.
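The naive Bayes combination described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: each classifier's reliability is summarised by a validation-set confusion model `P(decision | true gender)`, and the committee multiplies these likelihoods into a posterior. The confusion values and the uniform prior below are made-up examples.

```python
import numpy as np

def bayes_committee(decisions, conf_matrices, prior=(0.5, 0.5)):
    """Naive-Bayes committee machine for merging gender classifiers.

    decisions[i]        -- classifier i's hard output (0 = female, 1 = male)
    conf_matrices[i][g][d] -- P(classifier i outputs d | true gender g),
                              estimated on a held-out validation set
    Returns the normalised posterior over (female, male)."""
    posterior = np.array(prior, dtype=float)
    for d, cm in zip(decisions, conf_matrices):
        # likelihood of this classifier's vote under each true gender
        posterior *= np.array([cm[0][d], cm[1][d]])
    return posterior / posterior.sum()
```

With a 90%-accurate and a 70%-accurate classifier disagreeing, the committee sides with the stronger one, but by a margin that reflects both error rates; this weighting by estimated reliability is what lets the fusion beat the best single classifier when the members' errors are disjoint.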