Finger-vein biometrics is a recognition method based on the shape of the veins in the fingers, and it has the advantage of being difficult to forge. However, shadows are inevitably produced by the bones and fingernails, and changes in illumination occur when finger-vein images are acquired. Previous studies performed finger-vein recognition using a single type of image: either a texture image or a segmented finger-vein (shape) image. A texture image provides numerous features, but it is vulnerable to illumination changes during recognition and contains noise in regions other than the finger-vein region. A shape image is less affected by noise; however, its recognition accuracy is significantly reduced because fewer features are available and shadows cause mis-segmented regions. In this study, therefore, rough finger-vein regions are detected in the image to reduce the effect of mis-segmented regions and thus complement the drawbacks of shape-image-based finger-vein recognition. Furthermore, score-level fusion is performed on the two output scores of a deep convolutional neural network extracted from the texture and shape images, which reduces sensitivity to noise while efficiently exploiting the diverse features of the texture image. Two open databases, the Shandong University homologous multi-modal traits finger-vein database and the Hong Kong Polytechnic University finger-image database, are used for the experiments, and the proposed method shows better recognition performance than state-of-the-art methods.
INDEX TERMS Finger-vein recognition, shape and texture images of finger-vein, deep CNN, score-level fusion.
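The score-level fusion described above can be sketched as a weighted sum of the two CNN matching scores. This is a minimal illustration only; the abstract does not specify the fusion rule or weights, so the weighted-sum form and the weight `w` are assumptions.

```python
import numpy as np

def fuse_scores(texture_score, shape_score, w=0.5):
    """Weighted-sum score-level fusion of two matching scores.

    Both scores are assumed to be distances normalized to [0, 1]
    (lower = more similar). The weight `w` is hypothetical; in
    practice it would be tuned on a validation set.
    """
    return w * texture_score + (1.0 - w) * shape_score

# Example: fuse a texture-image score with a shape-image score.
fused = fuse_scores(0.2, 0.4, w=0.6)
```

Other fusion rules (min, max, product) fit the same interface; the weighted sum is merely the most common baseline.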
Among existing biometric methods, finger-vein recognition is beneficial because finger-vein patterns are located under the skin and are thus difficult to forge. Moreover, user convenience is high because non-invasive image-capturing devices are used for recognition. In real environments, however, optical blur can occur while capturing finger-vein images, due both to skin scattering blur caused by light scattering in the skin layer and to lens focus mismatch caused by finger movement. Images blurred in this manner can cause severe performance degradation in finger-vein recognition. The majority of previous studies addressed the restoration of skin-scattering-blurred images; only a few have addressed the restoration of optically blurred images. Even those studies on optical blur restoration performed restoration based on estimating an accurate point spread function (PSF) for a specific image-capturing device; thus, it is difficult to apply these methods to finger-vein images acquired by different devices. To address this problem, this paper proposes a new method for restoring optically blurred finger-vein images using a modified conditional generative adversarial network (conditional GAN) and recognizing the restored finger-vein images using a deep convolutional neural network (CNN). The results of experiments performed using two open databases, the Shandong University homologous multimodal traits (SDUMLA-HMT) finger-vein database and the Hong Kong Polytechnic University finger-image database (version 1), confirm that the proposed method outperforms existing methods.
INDEX TERMS Finger-vein recognition, optical blur image restoration, modified conditional GAN, CNN.
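A conditional-GAN generator for image restoration is typically trained with an adversarial term plus a pixel-wise reconstruction term (as in pix2pix). The abstract does not detail the paper's modified objective, so the following is only a generic sketch of that standard form; the non-saturating adversarial term and the L1 weight `lam` are assumptions.

```python
import numpy as np

def generator_loss(d_fake, restored, sharp, lam=100.0):
    """Pix2pix-style conditional-GAN generator objective (sketch).

    d_fake:   discriminator outputs on restored images, values in (0, 1)
    restored: generator output (restored finger-vein image)
    sharp:    corresponding ground-truth sharp image
    lam:      hypothetical weight on the L1 reconstruction term
    """
    # Non-saturating adversarial term: push D(G(x)) toward 1.
    adv = -np.mean(np.log(d_fake + 1e-12))
    # Pixel-wise L1 reconstruction term: keep output close to ground truth.
    l1 = np.mean(np.abs(restored - sharp))
    return adv + lam * l1
```

When the discriminator is fully fooled and the restoration is pixel-perfect, both terms vanish and the loss approaches zero.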
Accurate nuclear segmentation in histopathology images plays a key role in digital pathology. It is considered a prerequisite for determining cell phenotype, nuclear morphometrics, cell classification, and the grading and prognosis of cancer. However, it is a very challenging task because of the different types of nuclei, large intraclass variations, and diverse cell morphologies. Consequently, the manual inspection of such images under high-resolution microscopes is tedious and time-consuming. Alternatively, artificial intelligence (AI)-based automated techniques, which are fast, robust, and require less human effort, can be used. Recently, several AI-based nuclear segmentation techniques have been proposed. They have shown significant performance improvements on this task, but there is room for further improvement. Thus, we propose an AI-based nuclear segmentation technique in which we adopt a new nuclear segmentation network empowered by residual skip connections. Experiments were performed on two publicly available datasets: (1) The Cancer Genome Atlas (TCGA) and (2) Triple-Negative Breast Cancer (TNBC). The results show that our proposed technique achieves an aggregated Jaccard index (AJI) of 0.6794, Dice coefficient of 0.8084, and F1-measure of 0.8547 on the TCGA dataset, and an AJI of 0.7332, Dice coefficient of 0.8441, precision of 0.8352, recall of 0.8306, and F1-measure of 0.8329 on the TNBC dataset. These values are higher than those of the state-of-the-art methods.
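The Dice coefficient and F1-measure reported above can be computed from binary masks as below. This is a minimal pixel-wise sketch; note that nuclear-segmentation papers often report an object-level F1 (per-nucleus matching), which the abstract does not specify, so the pixel-wise form here is an assumption.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def f1_measure(pred, gt):
    """Pixel-wise F1: harmonic mean of precision and recall.

    For binary masks this coincides with the Dice coefficient.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / pred.sum()
    recall = tp / gt.sum()
    return 2 * precision * recall / (precision + recall)
```

The AJI extends the Jaccard index to aggregate over matched nucleus instances and is more involved; it penalizes both missed and spuriously split nuclei.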
Although face-based biometric recognition systems have been widely used in many applications, this type of recognition method is still vulnerable to presentation attacks, which use fake samples to deceive the recognition system. To overcome this problem, presentation attack detection (PAD) methods for face recognition systems (face-PAD), which aim to classify real and presentation attack face images before performing a recognition task, have been developed. However, the performance of PAD systems is limited and biased due to the lack of presentation attack images for training them. In this paper, we propose a method for artificially generating presentation attack face images by learning the characteristics of real and presentation attack images from a few captured images. As a result, our proposed method helps save time in collecting presentation attack samples for training PAD systems and possibly enhances the performance of PAD systems. Our study is the first attempt to generate presentation attack face images for a PAD system based on the CycleGAN network, a deep-learning-based framework for image generation. In addition, we propose a new measurement method to evaluate the quality of generated presentation attack images based on a face-PAD system. Through experiments with two public datasets (CASIA and Replay-mobile), we show that the generated face images capture the characteristics of presentation attack images, making them usable as captured presentation attack samples for PAD system training.
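The CycleGAN framework mentioned above learns mappings between two unpaired domains (here, real and presentation attack faces) and regularizes them with a cycle-consistency term: mapping an image to the other domain and back should recover the original. A minimal sketch of that term, with the standard CycleGAN weight `lam` as an assumed value:

```python
import numpy as np

def cycle_consistency_loss(real, reconstructed, lam=10.0):
    """CycleGAN cycle-consistency term: lam * mean |F(G(x)) - x|.

    real:          image from one domain (e.g. a real face)
    reconstructed: the image after mapping to the attack domain
                   and back via the two generators G and F
    lam:           weight on the cycle term (10 is the usual
                   CycleGAN default, assumed here)
    """
    return lam * np.mean(np.abs(reconstructed - real))
```

This term is what lets CycleGAN train without paired real/attack images of the same face: the adversarial losses shape each domain while the cycle term preserves identity.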
The conventional finger-vein recognition system is trained using one type of database and suffers serious performance degradation when tested on different types of databases. This degradation is caused by changes in image characteristics due to variable factors such as the position of the camera, the finger, and the lighting. Therefore, each database has different characteristics despite sharing the same finger-vein modality. However, previous research on improving the recognition accuracy of unobserved or heterogeneous databases is lacking. To overcome this problem, we propose a method to improve finger-vein recognition accuracy through domain adaptation between heterogeneous databases using cycle-consistent adversarial networks (CycleGAN), which enhances the recognition accuracy on unobserved data. The experiments were performed with two open databases: the Shandong University homologous multi-modal traits finger-vein database (SDUMLA-HMT-DB) and the Hong Kong Polytechnic University finger-image database (HKPolyU-DB). They showed that the equal error rate (EER) of finger-vein recognition was 0.85% when training with SDUMLA-HMT-DB and testing with HKPolyU-DB, an improvement of 33.1% over the second-best method. The EER was 3.4% when training with HKPolyU-DB and testing with SDUMLA-HMT-DB, an improvement of 4.8% over the second-best method.
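The equal error rate (EER) used above is the operating point at which the false acceptance rate (FAR) equals the false rejection rate (FRR). A minimal sketch of computing it from genuine and impostor distance scores, assuming lower distance means a better match:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate EER from genuine and impostor distance scores.

    Sweeps every observed score as a threshold and returns the
    FAR/FRR average at the point where the two rates are closest.
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    # FAR: fraction of impostor comparisons wrongly accepted.
    fars = np.array([np.mean(impostor <= t) for t in thresholds])
    # FRR: fraction of genuine comparisons wrongly rejected.
    frrs = np.array([np.mean(genuine > t) for t in thresholds])
    i = np.argmin(np.abs(fars - frrs))
    return (fars[i] + frrs[i]) / 2.0
```

With perfectly separated score distributions the EER is 0; the 0.85% reported above means the best threshold still misclassifies 0.85% of comparisons of each type.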