Over the past two decades, biometric palmprint recognition has received considerable research attention. Most recent methods in the literature adopt deep learning because of its high recognition accuracy and its ability to adapt to palmprint images from different acquisition settings. However, high-dimensional data with many uncorrelated and redundant features remain a challenge because of computational complexity. Feature selection is the process of choosing a subset of relevant features, with the aim of decreasing dimensionality, reducing running time, and improving accuracy. In this paper, we propose efficient unimodal and multimodal biometric systems based on deep learning and feature selection. Our approach, called simplified PalmNet–Gabor, focuses on improving PalmNet for fast recognition of multispectral and contactless palmprint images. We use Log-Gabor filters in the preprocessing stage to increase the contrast of palmprint features, and then reduce the number of features with feature selection and dimensionality reduction procedures. For the multimodal system, we fuse the modalities at the matching-score level to improve performance. The proposed method effectively improves the accuracy of PalmNet and reduces both the number of features and the computational time. We validated the proposed method on four public palmprint databases: two multispectral (CASIA and PolyU) and two contactless (Tongji and PolyU 2D/3D). Experiments show that our approach achieves a high recognition rate while using substantially fewer features.
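The Log-Gabor preprocessing mentioned above can be sketched as a radial transfer function applied in the frequency domain. The sketch below is a minimal illustration, not the paper's implementation; the centre frequency `f0` and bandwidth ratio `sigma_on_f` are assumed example values, not parameters reported in the abstract.

```python
import numpy as np

def log_gabor_filter(shape, f0=0.1, sigma_on_f=0.55):
    """Radial Log-Gabor transfer function in the frequency domain.

    f0 is the centre frequency (cycles/pixel) and sigma_on_f controls the
    bandwidth; both values here are illustrative assumptions.
    """
    rows, cols = shape
    # Normalised frequency coordinates (DC term sits at index [0, 0]).
    fx, fy = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0  # avoid log(0) at the DC term
    g = np.exp(-(np.log(radius / f0))**2 / (2 * np.log(sigma_on_f)**2))
    g[0, 0] = 0.0       # a Log-Gabor filter has no DC component
    return g

def apply_log_gabor(image, f0=0.1, sigma_on_f=0.55):
    """Filter a grayscale palmprint image; return the magnitude response."""
    g = log_gabor_filter(image.shape, f0, sigma_on_f)
    return np.abs(np.fft.ifft2(np.fft.fft2(image) * g))

# Usage on a dummy 128x128 "palmprint" region of interest:
img = np.random.rand(128, 128)
enhanced = apply_log_gabor(img)
print(enhanced.shape)  # (128, 128)
```

Because the Log-Gabor transfer function is defined directly in the frequency domain, a single FFT/inverse-FFT round trip suffices; no spatial-domain kernel needs to be constructed.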
Face recognition is currently the most widely used technology for verifying an individual's identity. However, its growing popularity has raised concerns about face presentation attacks, in which a photo or video of an authorized person's face is used to gain access to services. We propose an efficient and robust face presentation attack detection algorithm based on a combination of background subtraction (BS) and a convolutional neural network (CNN), together with an ensemble of classifiers: a fully connected (FC) classifier combined with a majority vote (MV) algorithm, evaluated against different face presentation attack instruments (e.g., printed photos and replayed videos). By using a majority vote to decide whether the input video is genuine, the proposed method significantly improves the performance of the face anti-spoofing (FAS) system. For evaluation, we considered the MSU MFSD, REPLAY-ATTACK, and CASIA-FASD databases. The results compare favorably with state-of-the-art methods: on the REPLAY-ATTACK database, we attained a half-total error rate (HTER) of 0.62% and an equal error rate (EER) of 0.58%, and we attained an EER of 0% on both the CASIA-FASD and MSU MFSD databases.
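The majority-vote decision over per-frame classifier outputs can be sketched as follows. This is a minimal illustration under our own assumptions: each element of `frame_labels` stands for the FC classifier's 0/1 decision on one video frame, and the tie-breaking rule (treat a tie as an attack) is our conservative choice, not something stated in the abstract.

```python
from collections import Counter

def majority_vote(frame_labels):
    """Decide genuine (1) vs. attack (0) for a video from per-frame labels.

    frame_labels: iterable of 0/1 per-frame decisions (assumed encoding:
    1 = genuine, 0 = presentation attack). Ties resolve to 'attack' as a
    conservative assumption on our part.
    """
    counts = Counter(frame_labels)
    return 1 if counts[1] > counts[0] else 0

# Usage: a video is labelled genuine only if most frames are.
print(majority_vote([1, 1, 0, 1, 0]))  # → 1 (genuine)
print(majority_vote([0, 1]))           # → 0 (tie resolved as attack)
```

Aggregating over frames this way means a few misclassified frames cannot flip the video-level decision, which is the robustness benefit the abstract attributes to the majority vote.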