Measurement of the vessel pattern in the finger is a promising method for identifying individuals owing to its convenience and security. In this paper we introduce a new perspective on finger vein recognition: whereas existing methods attempt to suppress the influence of deformations, our method treats deformations as discriminative information. The proposed technique is based on the observation that regular deformation, corresponding to a posture change, can only arise between genuine vein patterns. Methodologically, we use optimized matching to generate pixel-based 2D displacement fields that capture the deformations, and the uniformity of the texture extracted from these displacement fields is taken as the final matching score. Extensive experiments on two publicly available databases, PolyU and SDU-MLA, demonstrate the strong discriminability of the new deformation-derived feature: the equal error rate (EER) achieved is the lowest among state-of-the-art techniques.
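The displacement-then-uniformity idea above can be sketched as follows. This is a minimal illustration, not the paper's method: the exhaustive block-matching search, the block and search sizes, and the variance-based uniformity measure are all assumptions standing in for the paper's optimized matching and texture descriptor. A regular (rigid-like) deformation between genuine samples yields near-constant displacement vectors, hence low variance; an impostor pair yields scattered vectors.

```python
import numpy as np

def block_match_displacements(img_a, img_b, block=4, search=2):
    """Estimate a coarse 2D displacement field between two images by
    exhaustive block matching with a sum-of-absolute-differences cost."""
    h, w = img_a.shape
    disp = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = img_a[y:y + block, x:x + block]
            best_cost, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = img_b[yy:yy + block, xx:xx + block]
                        cost = np.abs(ref - cand).sum()
                        if cost < best_cost:
                            best_cost, best_dv = cost, (dy, dx)
            disp.append(best_dv)
    return np.array(disp, dtype=float)

def uniformity_score(disp):
    """Total variance of the displacement vectors: lower means a more
    regular deformation, i.e. more likely a genuine match."""
    return float(disp.var(axis=0).sum())
```

For two identical images the field is all zeros and the score is 0; the more irregular the field, the larger the score, so the score can be thresholded (small = accept) in a verification setting.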
Finger vein patterns are considered one of the most promising biometric authentication modalities owing to their security and convenience. Most currently available finger vein recognition methods use features extracted from a segmented blood vessel network. Because an improperly segmented network can degrade recognition accuracy, binary-pattern-based methods have been proposed, such as the Local Binary Pattern (LBP), Local Derivative Pattern (LDP), and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by these local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor, called the Local Directional Code (LDC), and applies it to finger vein recognition. In LDC, the local gradient orientation is encoded as an octonary (base-eight) number. Experimental results show that the proposed LDC-based method achieves better performance than methods using LLBP.
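The core coding step, quantizing local gradient orientation into eight directional codes, can be sketched as below. This is an illustrative reconstruction under assumptions, not the authors' exact LDC formulation: the gradient operator, the uniform eight-bin quantization, and the histogram-intersection comparison are all stand-ins.

```python
import numpy as np

def local_directional_code(img, bins=8):
    """Assign each pixel a base-eight directional code by quantizing
    its local gradient orientation into `bins` equal angular sectors."""
    gy, gx = np.gradient(img.astype(float))
    theta = np.arctan2(gy, gx)  # orientation in (-pi, pi]
    codes = np.floor((theta + np.pi) / (2 * np.pi) * bins).astype(int) % bins
    return codes

def code_histogram(codes, bins=8):
    """Normalized frequency of each directional code over the image."""
    hist = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity of two normalized histograms in [0, 1]."""
    return float(np.minimum(h1, h2).sum())
```

Two images are then compared through the similarity of their code histograms (or of per-block histograms, which preserves spatial layout).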
Finger veins are a promising biometric pattern for personal identification owing to their advantages over existing biometrics. Based on the spatial pyramid representation and the combination of complementary information such as gray level, texture, and shape, this paper proposes a simple but powerful feature called Pyramid Histograms of Gray, Texture and Orientation Gradients (PHGTOG). For a finger vein image, PHGTOG reflects both the global spatial layout and the local details of gray level, texture, and shape. To further improve recognition performance and reduce computational complexity, we select a personalized subset of PHGTOG features for each subject using a sparse weight vector trained with LASSO; this variant is called PFS-PHGTOG. We conduct extensive experiments to demonstrate the promise of PHGTOG and PFS-PHGTOG: on our databases, PHGTOG outperforms the other existing features, and PFS-PHGTOG further boosts performance over PHGTOG.
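The spatial pyramid construction underlying PHGTOG can be sketched for the gray channel alone. This is a simplified assumption-laden sketch: the full PHGTOG also concatenates texture and orientation-gradient histograms, and the number of levels and bins here are arbitrary choices. At level l the image is split into 2^l x 2^l cells, and per-cell histograms are concatenated, so coarse levels capture global layout while fine levels capture local detail.

```python
import numpy as np

def pyramid_gray_histogram(img, levels=3, bins=16):
    """Concatenate normalized per-cell gray-level histograms over a
    spatial pyramid: level l partitions the image into 2^l x 2^l cells."""
    h, w = img.shape
    feats = []
    for l in range(levels):
        n = 2 ** l
        for i in range(n):
            for j in range(n):
                cell = img[i * h // n:(i + 1) * h // n,
                           j * w // n:(j + 1) * w // n]
                hist, _ = np.histogram(cell, bins=bins, range=(0, 256))
                s = hist.sum()
                feats.append(hist / s if s else hist.astype(float))
    return np.concatenate(feats)
```

With 3 levels the feature has 1 + 4 + 16 = 21 cells, i.e. 21 x bins dimensions; a sparse (LASSO-style) weight vector can then zero out uninformative dimensions per subject, which is the idea behind PFS-PHGTOG.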
Retinal identification based on the vascular pattern of the retina provides among the most secure and accurate means of biometric authentication and has primarily been used with access-control systems at high-security facilities. Recently, there has been much interest in retinal identification. Because digital retina images always suffer from deformations, the Scale Invariant Feature Transform (SIFT), known for its distinctiveness and invariance to scale and rotation, has been introduced for retina-based identification. However, SIFT-based identification has shortcomings, such as difficult feature extraction and mismatching. To address these problems, a novel preprocessing method based on the Improved Circular Gabor Transform (ICGF) is proposed. After further processing with an iterated spatial anisotropic smoothing method, the number of uninformative SIFT keypoints decreases dramatically. Tested on the VARIA database and eight simulated retina databases combining rotation and scaling, the developed method presents promising results and shows robustness to rotation and scale changes.
Background: Mammography is one of the most popular tools for early detection of breast cancer. The contour of a breast mass in mammography is important information for distinguishing benign from malignant masses: a benign mass has a smooth, round or oval contour, while a malignant mass has an irregular shape and a spiculated contour. Several studies have shown that a 1D signature translated from the 2D contour can describe contour features well.
Methods: In this paper, we propose a new method to translate the 2D contour of a breast mass in mammography into a 1D signature. The method describes not only the contour features but also the regularity of the mass. We then segment the whole 1D signature into subsections and extract four local features from them, including a new contour descriptor: the root-mean-square (RMS) slope, which describes the roughness of the contour. KNN, SVM, and ANN classifiers are used to classify benign and malignant masses.
Results: The proposed method is tested on a set of 323 contours, comprising 143 benign and 180 malignant masses, from the Digital Database for Screening Mammography (DDSM). The best classification accuracy is 99.66%, obtained with the RMS-slope feature and an SVM classifier.
Conclusion: The performance of the proposed method is better than that of the traditional method. In addition, the RMS slope is an effective feature, comparable to most existing features.
Electronic supplementary material: The online version of this article (doi:10.1186/s12938-017-0332-0) contains supplementary material, which is available to authorized users.
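The 2D-contour-to-1D-signature translation and the RMS-slope descriptor can be sketched as follows. The radial (centroid-distance) signature used here is a common 2D-to-1D translation assumed for illustration; the paper's exact signature construction may differ. The intent of the descriptor carries over: a spiculated contour produces a rough signature with large first differences, a smooth contour a flat one.

```python
import numpy as np

def radial_signature(contour, n_samples=128):
    """1D signature: distance from the contour centroid to each contour
    point, resampled to a fixed length."""
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    d = np.linalg.norm(contour - centroid, axis=1)
    idx = np.linspace(0, len(d) - 1, n_samples)
    return np.interp(idx, np.arange(len(d)), d)

def rms_slope(signature):
    """Root mean square of the signature's first differences: rough,
    spiculated contours yield larger values than smooth ones."""
    return float(np.sqrt(np.mean(np.diff(signature) ** 2)))
```

For example, a circle (smooth, benign-like) gives an almost constant signature and an RMS slope near zero, while a star-shaped (spiculated, malignant-like) contour gives a clearly larger value, so the descriptor separates the two shape classes.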
Iris segmentation plays an important role in an iris recognition system: accurate segmentation lays a good foundation for the subsequent stages of recognition and greatly improves its efficiency. We propose four new feasible network schemes; the best model, a fully dilated convolutional network combined with U-Net (FD-UNet), is obtained by training and testing on the same datasets. FD-UNet uses dilated convolution instead of ordinary convolution to extract more global features, so the details of images are processed better. The proposed method is tested on near-infrared-illumination iris datasets (CASIA-iris-interval-v4.0 and ND-IRIS-0405) and a visible-light-illumination iris dataset (UBIRIS.v2). The F1 scores of our model on CASIA-iris-interval-v4.0, ND-IRIS-0405, and UBIRIS.v2 reach 97.36%, 96.74%, and 94.81%, respectively. The experimental results show that our network model improves accuracy and reduces the error rate, performing well and robustly on both near-infrared and visible-light iris datasets.
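The reason dilated convolution "sees more" than ordinary convolution can be made concrete in one dimension. This is a generic sketch of dilated (cross-correlation-style) filtering, not the FD-UNet architecture: a kernel of size k with dilation d spaces its taps d samples apart, covering a receptive field of d*(k-1)+1 samples with no extra parameters, which is how such models gather more global context per layer.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1D dilated filtering (cross-correlation form, as in deep
    learning): tap i of the kernel reads x at stride `dilation`, so the
    receptive field is dilation * (len(kernel) - 1) + 1 samples."""
    k = len(kernel)
    span = dilation * (k - 1) + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        taps = x[i:i + span:dilation]  # dilated taps: x[i], x[i+d], ...
        out[i] = float(np.dot(taps, kernel))
    return out
```

With kernel [1, 0, -1] and dilation 2, each output is x[i] - x[i+4]: a 3-tap filter acting as a difference over a 5-sample window, i.e. the same parameter count probing a wider neighborhood.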