The human body contains identity information that can be used for person recognition (verification/identification). In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in three ways compared to previous studies. First, we use images of the human body to recognize individuals. To overcome the limitation of previous body-based person recognition studies, which rely only on visible-light images, we use human body images captured by two different kinds of camera: a visible-light camera and a thermal camera. Using the two kinds of body image helps reduce the effects of noise, background, and variation in the appearance of the human body. Second, among the various available methods, we apply a state-of-the-art approach, the convolutional neural network (CNN), for image feature extraction, in order to overcome the limitations of traditional hand-designed feature extraction methods. Finally, with the image features extracted from the body images, recognition is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method enhances recognition accuracy compared to systems that use only visible-light or thermal images of the human body.
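To illustrate the final matching step described above, the sketch below compares an input feature vector against enrolled samples by Euclidean distance and returns the closest identity. This is a numpy toy, not the authors' implementation; the enrolled vectors and their dimensionality are invented for the example.

```python
import numpy as np

def match_score(probe, gallery):
    """Euclidean distance between the probe and each enrolled feature
    vector; the smallest distance gives the best-matching identity."""
    probe = np.asarray(probe, dtype=float)
    gallery = np.asarray(gallery, dtype=float)
    dists = np.linalg.norm(gallery - probe, axis=1)
    best = int(np.argmin(dists))
    return best, float(dists[best])

# Toy example: 3 enrolled identities with 4-D "CNN" feature vectors.
enrolled = [[1.0, 0.0, 0.0, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0]]
idx, d = match_score([0.9, 0.1, 0.0, 0.0], enrolled)
```

In a real system the gallery would hold one or more CNN feature vectors per enrolled person, and a distance threshold would decide between acceptance and rejection.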
With recent increases in security requirements, biometrics such as fingerprints, faces, and irises have been widely used in many recognition applications, including door access control, personal authentication for computers, Internet banking, automatic teller machines, and border-crossing controls. Finger vein recognition uses the unique patterns of finger veins to identify individuals with a high level of accuracy. This article proposes a new finger vein recognition method using minutia-based alignment and local binary pattern (LBP)-based feature extraction. Our study is novel in three ways compared to previous works. First, we use minutia points, such as bifurcation and ending points of the finger vein region, for image alignment. Second, instead of using the whole finger vein region, we use several extracted minutia points and a simple affine transform for alignment, which can be performed at a low computational cost. Third, after aligning the finger vein image based on the minutia points, we extract a unique finger vein code using an LBP, which significantly reduces the false rejection error and thus the equal error rate (EER). Our resulting EER was 0.081% with a total processing time of 118.6 ms.
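For readers unfamiliar with the LBP operator mentioned above, the minimal sketch below computes the basic 8-neighbour LBP code of a 3x3 patch: each neighbour at least as bright as the centre contributes one bit. The sample patch values are illustrative, and real systems build histograms of these codes over image regions rather than using single codes.

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbour LBP code for a 3x3 patch: each neighbour >=
    centre sets one bit, read clockwise from the top-left neighbour."""
    c = patch[1, 1]
    # Clockwise neighbour order starting at the top-left corner.
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, col) in enumerate(order):
        if patch[r, col] >= c:
            code |= 1 << bit
    return code

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 3, 8]])
code = lbp_code(patch)  # only neighbours 9, 7, and 8 exceed the centre 6
```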
Recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices, and accurate iris recognition is now needed in unconstrained scenarios. These environments cause the acquired iris image to exhibit occlusion, low resolution, blur, unusual glints, ghost effects, and off-angle views. Prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris segmentation in visible-light environments is made more challenging by visible-light noise. Deep learning with convolutional neural networks (CNNs) has brought considerable breakthroughs in various applications. To address iris segmentation in challenging images captured by visible-light and NIR camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even in inferior-quality images by exploiting the better information gradient flow between its dense blocks. In the experiments, five datasets covering visible-light and NIR environments were used. For the visible-light environment, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For the NIR environment, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed that the proposed IrisDenseNet achieves accurate segmentation and outperforms existing algorithms on all five datasets.
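The dense connectivity behind the "better information gradient flow" can be sketched conceptually: inside a dense block, each layer receives the channel-wise concatenation of all preceding feature maps. The toy below uses numpy with random 1x1-convolution stand-ins; the layer count and `growth_rate` are illustrative, not the paper's architecture.

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Conceptual dense connectivity: every layer sees the concatenation
    of all preceding feature maps along the channel axis. A random 1x1
    'convolution' plus ReLU stands in for the real convolutional layer."""
    features = [x]                        # x: (H, W, C) feature map
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)
        w = rng.standard_normal((inp.shape[-1], growth_rate))
        out = np.maximum(inp @ w, 0.0)    # 1x1 conv + ReLU stand-in
        features.append(out)              # kept for all later layers
    return np.concatenate(features, axis=-1)

rng = np.random.default_rng(0)
y = dense_block(np.ones((4, 4, 3)), num_layers=3, growth_rate=8, rng=rng)
# Output channels grow as C + num_layers * growth_rate = 3 + 3*8 = 27.
```

The key property visible here is that early feature maps are forwarded unchanged to every later layer, which is what shortens the gradient paths in a real dense network.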
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most previously proposed PAD methods for face recognition systems have focused on handcrafted image features designed with expert knowledge, such as the Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding detection accuracy that is low and varies with the characteristics of the presentation attack face images. Deep learning methods developed in the computer vision research community have proven suitable for automatically training feature extractors that can enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our proposed method uses a convolutional neural network (CNN) to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate between real and presentation attack face images. By combining the two types of image features, we form a new type of image feature, called hybrid features, which has stronger discrimination ability than either type alone. Finally, we use a support vector machine (SVM) to classify the image features into the real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.
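To make the fusion-and-classify pipeline concrete, the sketch below concatenates two toy feature types into hybrid vectors and trains a tiny linear SVM by sub-gradient descent on the hinge loss. This is a numpy-only illustration under stated assumptions: the feature values, labels, and hyperparameters are invented, and the paper's actual features come from a CNN and MLBP, with a standard SVM solver.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Tiny linear SVM: sub-gradient descent on the regularized hinge
    loss. X: (n, d) hybrid feature vectors, y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                # margin-violating samples
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Hybrid features: concatenate toy "deep" features with toy "MLBP" histograms.
deep = np.array([[2.0, 0.1], [1.8, 0.0], [0.1, 2.0], [0.0, 1.9]])
mlbp = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])
X = np.hstack([deep, mlbp])
y = np.array([1, 1, -1, -1])             # +1 = real face, -1 = attack
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

The `np.hstack` line is the whole "hybrid feature" idea in miniature: the classifier sees both feature types at once and can weight whichever is more discriminative.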
Image-based computer-aided diagnosis (CAD) systems have been developed to assist doctors in diagnosing thyroid cancer using ultrasound thyroid images. However, the performance of these systems depends strongly on the choice of detection and classification methods. Although there is previous research on this topic, there is still room to improve the classification accuracy of existing methods. To address this issue, we propose an artificial intelligence-based method for enhancing the performance of the thyroid nodule classification system. We extract image features from ultrasound thyroid images in two domains: the spatial domain, based on deep learning, and the frequency domain, based on the fast Fourier transform (FFT). Using the extracted features, we apply a cascade classifier scheme to classify the input thyroid images as either benign (negative) or malignant (positive) cases. Through extensive experiments on a public dataset, the thyroid digital image database (TDID), we show that our proposed method outperforms state-of-the-art methods and sets a new benchmark for the thyroid nodule classification problem.
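As one hedged example of frequency-domain features, the sketch below takes the log-magnitude FFT spectrum of an image patch and averages it in radial bins around the zero frequency. The radial-binning descriptor and the toy patch are this sketch's own assumptions, not necessarily the paper's exact descriptor.

```python
import numpy as np

def fft_features(img, n_bins=8):
    """Frequency-domain descriptor: log-magnitude of the 2-D FFT,
    averaged in radial bins around the (shifted) zero frequency."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    logmag = np.log1p(np.abs(spec))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)       # distance from DC term
    edges = np.linspace(0, r.max() + 1e-9, n_bins + 1)
    return np.array([logmag[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

img = np.add.outer(np.arange(16.0), np.arange(16.0))  # toy image patch
f = fft_features(img)  # fixed-length vector, usable as classifier input
```

Averaging over radial rings makes the descriptor rotation-tolerant and reduces the full spectrum to a short, fixed-length vector that a cascade classifier can consume alongside the deep spatial features.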