Face recognition (FR) is the process of identifying people from facial images. The technology is applied broadly in biometrics, information security, access control, law enforcement, smart cards, and surveillance. A facial recognition system is built in two stages: the first extracts facial features, and the second performs pattern classification. Deep learning, specifically the convolutional neural network (CNN), has recently made commendable progress in FR technology. This paper investigates the performance of a pre-trained CNN combined with a multi-class support vector machine (SVM) classifier, and the performance of transfer learning using the AlexNet model, for classification. The study considers CNN architectures that have recorded the best outcomes in the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) in recent years, specifically AlexNet and ResNet-50. Recognition accuracy was used to assess the performance optimization of the CNN algorithm. Comprehensive experiments on the ORL, GTAV face, Georgia Tech face, Labeled Faces in the Wild (LFW), frontalized Labeled Faces in the Wild (F_LFW), YouTube Faces, and FEI face datasets showed improved classification rates. The results show that our model achieved higher accuracy than most state-of-the-art models, with accuracies ranging from 94% to 100% across all databases and an improvement in recognition accuracy of up to 39%.
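The pipeline described above, deep features fed to a multi-class SVM, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes 4096-dimensional feature vectors have already been extracted from AlexNet's penultimate fully connected layer, and random vectors stand in for real embeddings; all dimensions and names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for deep features: one 4096-d vector per face image,
# as would come from AlexNet's penultimate fully connected layer.
n_subjects, images_per_subject, feat_dim = 10, 20, 4096
X = rng.normal(size=(n_subjects * images_per_subject, feat_dim))
y = np.repeat(np.arange(n_subjects), images_per_subject)
# Shift each subject's features so classes are separable in this toy data.
X += y[:, None] * 0.5

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Multi-class SVM on the deep features; a linear kernel is a common
# choice for high-dimensional CNN embeddings.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"toy recognition accuracy: {acc:.2f}")
```

In practice the random matrix `X` would be replaced by actual AlexNet activations for each face image, with one row per image and one label per subject identity.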
The iris is a powerful tool for reliable human identification, with the potential to identify individuals with a high degree of assurance. Extracting good features is the most significant step in an iris recognition system. In the past, various features have been used to implement iris recognition systems, most of them hand-crafted features designed by biometrics specialists. Following the success of deep learning in computer vision, features learned by a Convolutional Neural Network (CNN) have gained much attention for iris recognition. In this paper, we evaluate learned features extracted from a pre-trained CNN (the AlexNet model) followed by a multi-class Support Vector Machine (SVM) algorithm for classification. The performance of the proposed system is investigated when features are extracted from the segmented iris image and from the normalized iris image. The proposed iris recognition system is tested on four public datasets: IITD, CASIA-Iris-V1, CASIA-Iris-Thousand, and CASIA-Iris-V3 Interval. The system achieved excellent results with a very high accuracy rate.
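The "normalized iris image" mentioned above is conventionally produced by Daugman's rubber-sheet model, which unwraps the annular iris region between the pupil and limbic boundaries into a fixed-size rectangular strip; the abstract does not specify the method, so this is an assumption. A minimal sketch, assuming circular boundaries and nearest-neighbor sampling (the circle parameters, resolution, and function name below are illustrative):

```python
import numpy as np

def rubber_sheet_normalize(img, pupil_xy, pupil_r, iris_r,
                           radial_res=64, angular_res=256):
    """Unwrap the annulus between the pupil and iris circles into a
    (radial_res x angular_res) rectangle via nearest-neighbor sampling."""
    h, w = img.shape
    cx, cy = pupil_xy
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(0, 1, radial_res)
    out = np.zeros((radial_res, angular_res), dtype=img.dtype)
    for i, r in enumerate(radii):
        # Interpolate radially between the pupil and iris boundaries.
        rad = pupil_r + r * (iris_r - pupil_r)
        xs = np.clip(np.round(cx + rad * np.cos(thetas)).astype(int), 0, w - 1)
        ys = np.clip(np.round(cy + rad * np.sin(thetas)).astype(int), 0, h - 1)
        out[i] = img[ys, xs]
    return out

# Toy example: a synthetic 200x200 grayscale "eye" image.
img = np.arange(200 * 200, dtype=np.float32).reshape(200, 200)
strip = rubber_sheet_normalize(img, pupil_xy=(100, 100), pupil_r=20, iris_r=60)
print(strip.shape)  # (64, 256)
```

The fixed-size strip makes the representation invariant to pupil dilation and iris size, which is why feature extraction (hand-crafted or CNN-based) is often applied to the normalized image rather than the raw segmented one.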
Chromosome analysis is an essential task in a cytogenetics lab, where cytogeneticists diagnose whether abnormalities are present. Karyotyping is a standard technique in chromosome analysis that classifies a metaphase image into 24 chromosome classes. The two main categories of chromosome abnormality are structural abnormalities, which change the structure of chromosomes, and numerical abnormalities, which include either monosomy (a missing chromosome) or trisomy (an extra copy of a chromosome). Manual karyotyping is complex, time-consuming, and requires high domain expertise. With these motivations, in this research we used deep learning to automate karyotyping and recognize common numerical abnormalities on a dataset of 147 non-overlapping metaphase images collected from the Center of Excellence in Genomic Medicine Research at King Abdulaziz University. The metaphase images pass through three stages. The first is individual chromosome detection using the YOLOv2 Convolutional Neural Network followed by chromosome post-processing; this step achieved 0.84 mean IoU, 0.9923 AP, and 100% individual chromosome detection accuracy. The second stage is feature extraction and classification, where we fine-tune the VGG19 network using two different approaches: adding extra fully connected layer(s), and replacing the fully connected layers with a global average pooling layer. The best accuracy obtained is 95.04%. The final stage is abnormality detection, which achieved 96.67% abnormality detection accuracy. To further validate the proposed classification method, we evaluated it on the publicly available Biomedical Imaging Laboratory dataset and achieved 94.11% accuracy.
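Once every chromosome in a metaphase image has been classified, the numerical-abnormality stage reduces to counting per-class occurrences: in a normal metaphase each autosome (classes 1-22) appears exactly twice, so a count of one indicates monosomy and a count of three indicates trisomy. A hedged sketch of that counting logic, not the paper's exact procedure (the function name and label scheme are illustrative, and sex chromosomes are ignored for simplicity):

```python
from collections import Counter

def detect_numerical_abnormalities(labels):
    """labels: predicted autosome class (1-22) for each chromosome
    detected in one metaphase image. Returns {class: finding}."""
    counts = Counter(labels)
    findings = {}
    for cls in range(1, 23):
        n = counts.get(cls, 0)
        if n == 1:
            findings[cls] = "monosomy"
        elif n == 3:
            findings[cls] = "trisomy"
        elif n != 2:
            findings[cls] = f"unexpected count ({n})"
    return findings

# Normal metaphase: every autosome appears twice.
normal = [c for c in range(1, 23) for _ in range(2)]
print(detect_numerical_abnormalities(normal))         # {}

# Trisomy 21: one extra copy of chromosome 21.
print(detect_numerical_abnormalities(normal + [21]))  # {21: 'trisomy'}
```

This also shows why the classification accuracy of the previous stage matters so much: a single misclassified chromosome changes two class counts and can produce a false monosomy and a false trisomy at once.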