Multimodal biometric systems are considered a way to mitigate the limitations of single-trait (unimodal) systems. This paper proposes new schemes based on score-level, feature-level and decision-level fusion to efficiently fuse the face and iris modalities. The Log-Gabor transform is applied for feature extraction on both modalities. At each fusion level, different schemes are proposed to improve recognition performance and, finally, a combination of schemes across fusion levels yields an optimized and robust scheme. In this study, the CASIA-Iris-Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, the Backtracking Search Algorithm (BSA), a novel population-based iterative evolutionary algorithm, is applied to improve recognition accuracy by reducing the number of features and selecting optimized weights for feature-level and score-level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of the proposed fusion schemes over unimodal and other multimodal fusion methods.
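The abstract does not specify the exact fusion rules. A common instantiation of score-level fusion is min-max normalization of each matcher's scores followed by a weighted sum, where the weights could be those selected by BSA. The sketch below assumes this instantiation; the function names and toy scores are illustrative, not taken from the paper:

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores onto [0, 1] with min-max normalization."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)

def weighted_sum_fusion(face_scores, iris_scores, w_face=0.5):
    """Score-level fusion: weighted sum of the two normalized score sets.
    w_face is the weight on the face modality; the iris gets 1 - w_face."""
    return (w_face * min_max_normalize(face_scores)
            + (1.0 - w_face) * min_max_normalize(iris_scores))

# Toy scores on deliberately different scales; normalization makes them comparable.
fused = weighted_sum_fusion([0.2, 0.8, 0.5], [10, 40, 25], w_face=0.6)
```

Normalization is essential here: face and iris matchers typically produce scores on incompatible scales, and the weighted sum is only meaningful after both are mapped onto a common range.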
This study considers the design of a new dynamic and optimal scheme for face-iris fusion based on score-level, feature-level and decision-level fusion. Prior to implementing the proposed combined-level fusion, several schemes are implemented separately at each fusion level to investigate the performance improvement each level provides for the face and iris modalities. The optimum scheme is constructed by selecting flexible and dynamic features and scores of the face and iris biometrics and then combining the advantages of the different fusion levels; consequently, the scheme produces a set of fast and flexible features and scores for fusion. In addition, the idea of threshold-optimised decisions is used in this study to fuse the optimised decisions of the face and iris biometrics. Experimental results on verification rates demonstrate a significant improvement of the proposed combined-level fusion scheme over unimodal and multimodal fusion methods.
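The threshold-optimised decision fusion mentioned above can be illustrated with a minimal sketch: each modality makes an accept/reject decision against its own (optimised) threshold, and the two binary decisions are combined with a boolean rule. The AND/OR rules and all names below are illustrative assumptions, not the paper's exact method:

```python
def threshold_decision(score, threshold):
    """Per-modality decision: accept (True) when the match score meets the threshold."""
    return score >= threshold

def fuse_decisions(face_score, iris_score, t_face, t_iris, rule="AND"):
    """Decision-level fusion: combine two accept/reject decisions with a boolean rule.
    AND requires both modalities to accept (lower false-accept rate);
    OR requires only one (lower false-reject rate)."""
    d_face = threshold_decision(face_score, t_face)
    d_iris = threshold_decision(iris_score, t_iris)
    return (d_face and d_iris) if rule == "AND" else (d_face or d_iris)
```

The "threshold-optimised" aspect would then amount to searching for the pair (t_face, t_iris) that maximises verification performance on a training set before the rule is applied.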
Cosmetics pose a challenge to the recognition performance of face and iris biometric systems because of their ability to alter natural facial and iris patterns. Facial makeup and iris contact lenses are considered the most commonly applied cosmetics for the face and iris in this study. The present work presents a novel solution for detecting cosmetics in both face and iris biometrics through the fusion of texture, shape and color descriptors of images. The proposed cosmetic-detection scheme combines the micro-texton information from the local primitives of texture descriptors with the color spaces obtained from overlapping blocks, in order to better detect spots, flat areas, edges, edge ends, curves, appearance and colors. The scheme was applied to the YMU (YouTube Makeup) facial makeup database and the IIIT-Delhi Contact Lens iris database. The results demonstrate that the proposed cosmetic-detection scheme significantly outperforms the other schemes implemented in this study.
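The abstract describes fusing texture descriptors with block-wise color information but does not name the exact descriptors. As one plausible instantiation, a basic 8-neighbour LBP texture histogram can be concatenated with per-channel color histograms into a single feature vector; everything below (descriptor choice, bin counts, function names) is an illustrative assumption:

```python
import numpy as np

def lbp_histogram(gray_block):
    """256-bin histogram of 8-neighbour LBP codes over the interior pixels.
    Each pixel's code sets one bit per neighbour that is >= the centre value."""
    c = gray_block[1:-1, 1:-1]
    neighbours = [gray_block[0:-2, 0:-2], gray_block[0:-2, 1:-1], gray_block[0:-2, 2:],
                  gray_block[1:-1, 2:],   gray_block[2:, 2:],     gray_block[2:, 1:-1],
                  gray_block[2:, 0:-2],   gray_block[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes += (n >= c).astype(np.uint8) * (1 << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)

def fused_descriptor(gray_block, rgb_block, color_bins=8):
    """Concatenate the LBP texture histogram with per-channel color histograms,
    each normalized to sum to 1, into one fused descriptor."""
    texture = lbp_histogram(gray_block)
    color = np.concatenate([
        np.histogram(rgb_block[..., ch], bins=color_bins, range=(0, 256))[0]
        for ch in range(3)]).astype(float)
    color /= max(color.sum(), 1)
    return np.concatenate([texture, color])
```

In a detection pipeline, such descriptors would be computed per overlapping block and the block vectors concatenated before being passed to a classifier.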
This study is concerned with analysing face-ocular multimodal biometric systems for predicting a person's gender. In particular, it is the first study to consider the fusion of face and ocular biometrics for gender prediction via a hybrid multimodal scheme. The authors aim to investigate the effect of multimodal biometric systems with score-level and feature-level fusion on gender classification. The uniform local binary pattern (ULBP) feature extractor is used to extract face and ocular texture information. This paper proposes selecting efficient feature sets from both modalities using a novel evolutionary algorithm called the backtracking search algorithm (BSA). A support vector machine (SVM) is then applied for classification using the fused face and ocular features and scores. The proposed scheme is validated on the CASIA-Iris-Distance and MBGC multimodal biometric databases with a subject-disjoint training and testing evaluation. The achieved gender recognition accuracy demonstrates the superiority of the hybrid multimodal face-ocular scheme over the unimodal face and ocular schemes implemented in this study.
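The abstract names ULBP as the texture descriptor without implementation detail. The defining step of ULBP is the mapping from the 256 raw 8-bit LBP codes down to 59 histogram bins: the 58 "uniform" patterns (at most two 0/1 transitions in the circular bit string) each get their own bin, and all remaining codes share one bin. A minimal sketch of that mapping, with illustrative function names, is:

```python
import numpy as np

def is_uniform(code):
    """An 8-bit LBP code is 'uniform' if its circular bit pattern
    contains at most two 0->1 / 1->0 transitions."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2

def ulbp_mapping():
    """Map each of the 256 raw LBP codes to one of 59 ULBP histogram bins:
    the 58 uniform patterns get bins 0..57; all non-uniform codes share bin 58."""
    table = np.empty(256, dtype=int)
    label = 0
    for code in range(256):
        if is_uniform(code):
            table[code] = label
            label += 1
        else:
            table[code] = 58
    return table
```

The resulting 59-bin histograms per image region would then be concatenated into the feature vectors that BSA prunes and the SVM classifies.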