This paper proposes a low-cost personal identification system that combines palm vein and palmprint biometric features. The system consists of near-infrared and visible light-emitting diode (LED) arrays, a low-cost visual sensor, a Xilinx chip, and other components. A real-time image quality assessment (IQA) method for the combined palm vein and palmprint features is also proposed. Two types of LED, with central wavelengths of 890 nm (near-infrared) and 680 nm (visible), are used to capture the palm vein and palmprint images, respectively. LED brightness is adaptively adjusted through feedback according to the image quality, which is assessed by combining global 2D entropy and local 2D entropy. The palm vein and palmprint images are acquired nearly simultaneously, and each acquired image undergoes several preprocessing steps to extract the vein and print patterns. An image-level wavelet-based fusion strategy reduces image storage requirements on the embedded platform, and a complex-wavelet-based fusion strategy is implemented on the PC platform. A deep scattering convolutional network extracts features from the fused images, and a multi-class support vector machine performs training and recognition. Characteristics of several vision-based personal identification systems are also discussed. The proposed real-time IQA method, together with the fusion strategy and feature extraction algorithm in our prototype system, has substantially lower computational requirements than previous fusion strategies. It also demands less memory and yields a lower equal error rate than classical feature extraction algorithms. INDEX TERMS Biometrics, low-level image analysis, feature extraction and description, image circuits and architectures, hardware/software co-design.
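The entropy-based quality score that drives the LED feedback can be sketched as follows. This is a minimal illustration of one common 2D-entropy formulation (joint histogram of each pixel's gray level and its neighborhood mean); the paper's exact variant, bin counts, and block size may differ, and the function names here are our own.

```python
import numpy as np

def entropy_2d(img, k=3):
    """Global 2D entropy: entropy of the joint histogram of each
    pixel's gray level and its k x k neighborhood mean."""
    img = np.asarray(img, dtype=np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # neighborhood mean via an explicit box filter (edges padded)
    mean = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            mean += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    mean /= k * k
    # joint histogram over (gray level, neighborhood mean)
    hist, _, _ = np.histogram2d(img.ravel(), mean.ravel(),
                                bins=64, range=[[0, 256], [0, 256]])
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def local_entropy_2d(img, block=32, k=3):
    """Local 2D entropy: mean 2D entropy over non-overlapping blocks,
    so that locally saturated or dark regions pull the score down."""
    h, w = img.shape
    vals = [entropy_2d(img[y:y + block, x:x + block], k)
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)]
    return float(np.mean(vals))
```

In a feedback loop of this kind, a frame with a wide, well-spread gray-level distribution scores high on both measures, while an over- or under-exposed frame scores low, and the LED duty cycle would be raised or lowered until the combined score falls inside an acceptance band.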
The sweat pore, one of the level-3 features of a fingerprint, has attracted much attention in fingerprint recognition. Sweat pores on the fingerprint surface become unclear or blurred when fingers are stained or damaged. Subcutaneous sweat pores, as cross sections of the sweat glands, are resistant to such external interference. Using 3D fingertip information measured by optical coherence tomography (OCT), subcutaneous sweat pore estimation from OCT volume data is investigated. First, an adaptive subcutaneous pore image reconstruction method is proposed. It uses the skin surface and the viable-epidermis junction as references and realizes depth-adaptive pore image reconstruction. Second, a dilated U-Net, combining the U-Net with dilated convolution, is proposed for subcutaneous sweat pore extraction; it prevents the loss of sweat pore information caused by downsampling. To the best of our knowledge, this is the first time subcutaneous sweat pore extraction has been investigated. Experiments on both subcutaneous pore image reconstruction and sweat pore extraction are conducted. The qualitative and quantitative results show that the proposed adaptive method outperforms the fixed-depth method in subcutaneous pore image reconstruction, and that the dilated U-Net outperforms other methods in subcutaneous sweat pore extraction.
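The depth-adaptive idea (following the skin surface rather than slicing at a fixed depth) can be sketched roughly as below. This is a simplified stand-in, not the paper's method: the surface is located per A-line by peak intensity instead of the surface / viable-epidermis junction detection the authors describe, and `offset` and `window` are hypothetical parameters.

```python
import numpy as np

def adaptive_pore_slice(volume, offset=20, window=3):
    """Depth-adaptive en-face reconstruction from an OCT volume
    shaped (depth, height, width). For each A-line, the surface is
    taken as the depth of peak intensity, and the pore image is
    averaged over `window` voxels starting `offset` voxels below
    that surface, so a tilted finger still yields a consistent
    subsurface slice."""
    volume = np.asarray(volume, dtype=np.float64)
    d, h, w = volume.shape
    surface = np.argmax(volume, axis=0)            # (h, w) surface depth map
    start = np.clip(surface + offset, 0, d - window)
    # gather `window` voxels below the surface at each lateral position
    idx = start[None, :, :] + np.arange(window)[:, None, None]
    return np.take_along_axis(volume, idx, axis=0).mean(axis=0)
```

A fixed-depth reconstruction would replace `surface` with a constant, which is exactly what fails when the fingertip surface is curved or tilted relative to the scan axis.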