Abstract: The increasing demand for a palmprint biometric system with a low error rate has prompted researchers to use multispectral imaging to overcome the limits of techniques that operate in visible light. In order to improve the accuracy of multispectral palmprint recognition, we explore two levels of fusion: pixel-level and feature-level approaches. The former is based on a maximum selection rule, which combines discriminating information from different spectral bands of the discrete wavelet transform of multis…
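The maximum selection rule mentioned above can be sketched as follows. This is a minimal single-level Haar DWT fusion in NumPy, not the authors' implementation: the approximation subbands are averaged and each detail coefficient is taken from the band with the largest magnitude, both of which are illustrative assumptions about the rule's details.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a, b = img[0::2, :], img[1::2, :]
    lo, hi = (a + b) / 2.0, (a - b) / 2.0          # row transform
    def cols(x):
        return (x[:, 0::2] + x[:, 1::2]) / 2.0, (x[:, 0::2] - x[:, 1::2]) / 2.0
    LL, LH = cols(lo)
    HL, HH = cols(hi)
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    def icols(l, h):
        x = np.empty((l.shape[0], l.shape[1] * 2))
        x[:, 0::2], x[:, 1::2] = l + h, l - h
        return x
    lo, hi = icols(LL, LH), icols(HL, HH)
    img = np.empty((lo.shape[0] * 2, lo.shape[1]))
    img[0::2, :], img[1::2, :] = lo + hi, lo - hi
    return img

def fuse_max_rule(bands):
    """Pixel-level fusion: average the LL subbands across bands and
    keep the maximum-magnitude detail coefficient at each position."""
    subs = [haar_dwt2(np.asarray(b, dtype=float)) for b in bands]
    LL = np.mean([s[0] for s in subs], axis=0)
    details = []
    for k in (1, 2, 3):                            # LH, HL, HH
        stack = np.stack([s[k] for s in subs])
        idx = np.abs(stack).argmax(axis=0)         # band with strongest detail
        details.append(np.take_along_axis(stack, idx[None], 0)[0])
    return haar_idwt2(LL, *details)
```

Fusing a band with itself reproduces the original image exactly, which is a quick sanity check that the transform pair is lossless.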
“…Image fusion aims to combine the complementary information of multisource images and make the fused image more understandable and purposeful. For multispectral palmprint recognition [9–12], the task of image fusion is to preserve the useful features and remove the confusing identity information in each fusion component so that the images can be separated perfectly in the fusion space. For this purpose, an improved weighted Fisher criterion is applied to the BIMFs extracted from multispectral images.…”
“…In addition, traditional methods obtain features from a single spectral band and consequently cannot capture enough discriminative identity information. In recent research, there has been a growing trend to use multispectral images instead of a single spectral image to improve the accuracy of a palmprint recognition system [9–12]. Images are captured at the Blue, Green, Red and Near-infrared (NIR) spectral bands respectively, each of which commonly highlights different, complementary palm features.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…Qualitative analysis demonstrated that the CT-based image fusion could achieve a higher recognition accuracy. In addition, other innovative methods, such as the nonsubsampled contourlet transform (NSCT) [14, 15], discrete wavelet transform (DWT) [11, 12, 16], shift-invariant digital wavelet transform (SIDWT) [17, 18] and digital shearlet transform (DST) [19, 20], have been widely and successfully used in multispectral palmprint image fusion. Alternatively, in the case of fusion at the matching-score level, palmprint features are extracted from each spectral band separately, followed by a comparator to obtain a matching score.…”
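Matching-score-level fusion, as contrasted here with image-level fusion, can be sketched as a normalized weighted sum of per-band comparator scores. The min-max normalization and the equal default weights below are illustrative assumptions, not a method taken from the cited works:

```python
import numpy as np

def score_level_fusion(scores_per_band, weights=None):
    """Fuse matching scores from several spectral bands:
    min-max normalize each band's scores to [0, 1], then
    combine them with a weighted sum (equal weights by default)."""
    n = len(scores_per_band)
    weights = weights if weights is not None else [1.0 / n] * n
    fused = np.zeros(len(scores_per_band[0]), dtype=float)
    for w, s in zip(weights, scores_per_band):
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        norm = (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
        fused += w * norm
    return fused
```

The fused score can then be thresholded or ranked exactly as a single-band score would be.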
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is built on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is used to construct the fusion coefficients at the decomposition level, so that the images can be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has a fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% under unsatisfactory lighting conditions.
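One plausible reading of the weighted Fisher criterion is to weight each band (or each BIMF) by its class separability, i.e. the ratio of between-class to within-class scatter computed on training features. The sketch below derives such normalized fusion coefficients; it is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

def fisher_fusion_weights(band_features, labels):
    """Per-band Fisher ratio (between-class / within-class scatter),
    normalized to sum to 1, usable as image-fusion coefficients.
    band_features: list of (n_samples, n_features) arrays, one per band."""
    labels = np.asarray(labels)
    scores = []
    for X in band_features:
        X = np.asarray(X, dtype=float)
        mu = X.mean(axis=0)
        sb = sw = 0.0
        for c in np.unique(labels):
            Xc = X[labels == c]
            mc = Xc.mean(axis=0)
            sb += len(Xc) * np.sum((mc - mu) ** 2)   # between-class scatter
            sw += np.sum((Xc - mc) ** 2)             # within-class scatter
        scores.append(sb / sw if sw > 0 else 0.0)
    w = np.asarray(scores)
    return w / w.sum()
```

A band whose training features separate the identities well receives a correspondingly larger fusion coefficient.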
“…Digital medical images have become a crucial tool for decision-making and treatment procedures in healthcare [1]–[3]. Determining the region of interest in medical images is one of the most important basic operations in diagnostic systems [4], [5], because most images contain useless areas [6], [7]. For example, the relative size of the background and the object differs from one image to another, as an object close to the scanner occupies the largest part of the image.…”
In the world of medical images, each image contains a region of interest (ROI) with unique features that distinguish one image, or one group of images, from another. This paper proposes a new method for extracting the region of interest from DEXA images via K-means clustering and edge detection. First, noise is reduced with a mean filter; the image is then segmented into two clusters by K-means, followed by edge detection to identify object boundaries and an erosion operation to clarify the boundaries and obtain the correct ROI coordinates. The results show that the accuracy of the proposed system is 99%, with 174 of 176 images cropped correctly. The dataset used in this work is 'Osteoporosis DEXA Scans Images of Spine from Pakistan'.
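The described pipeline (mean filter, two-cluster K-means, erosion, ROI coordinates) can be sketched in NumPy. The choice of the brighter cluster as the object and the 3×3 kernel sizes are assumptions for illustration, not parameters reported by the paper:

```python
import numpy as np

def mean_filter(img, k=3):
    """Simple k x k mean filter via padded sliding sums."""
    pad = k // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def kmeans_2cluster(img, iters=20):
    """1-D K-means (k=2) on pixel intensities; returns a binary mask
    marking the brighter cluster (assumed to be the object)."""
    c = np.array([img.min(), img.max()], dtype=float)
    for _ in range(iters):
        assign = np.abs(img[..., None] - c).argmin(-1)
        for j in (0, 1):
            if np.any(assign == j):
                c[j] = img[assign == j].mean()
    return assign == int(c.argmax())

def erode(mask, k=3):
    """Binary erosion with a k x k structuring element."""
    pad = k // 2
    p = np.pad(mask, pad, mode="constant", constant_values=False)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def roi_bbox(img):
    """Pipeline: mean filter -> K-means -> erosion -> ROI bounding box."""
    mask = erode(kmeans_2cluster(mean_filter(img)))
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return ys.min(), ys.max(), xs.min(), xs.max()
```

The eroded mask's bounding box gives the crop coordinates; the erosion step removes thin boundary responses so the box hugs the object.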
“…With this in mind, Hassner et al. [5] proposed new varieties of LBP, called Three-Patch LBP (TP-LBP) and Four-Patch LBP (FP-LBP), applied to the Labeled Faces in the Wild image set. In their work, Bouchemha et al. [6] attempted to extract critical information from multispectral palmprint images using dynamic and statistical features based on the ridgelet transform and the parameters of the Gray-Level Co-occurrence Matrix (GLCM). Finally, it should be noted that several methods combine these features to increase the performance of multispectral/hyperspectral palmprint recognition systems.…”
The extraction of distinctive image features is the most important step in pattern recognition systems because of its direct impact on the machine learning commonly used in such systems. In this paper, we propose a handcrafted feature learning method, based on distinctive local image descriptors, for multispectral palmprint representation and recognition. In the training phase, a projection matrix (hash functions) and a codebook are obtained from the Pixel Difference Vectors (PDVs) of non-overlapping sub-blocks, to be used as prior knowledge in the feature extraction step. In the test phase, the extracted PDVs are encoded into binary codes using the projection matrix, then pooled into a histogram feature using the codebook. Experimental results on the CASIA database show that the proposed framework achieves better performance than state-of-the-art methods, in particular the handcrafted ones.
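A minimal sketch of the test-phase encoding: PDVs are computed from each pixel's neighborhood, projected, binarized, and pooled into a histogram. The random projection below stands in for the learned hash functions, and the direct code-to-bin mapping stands in for the learned codebook, both of which the paper obtains in training:

```python
import numpy as np

def pixel_difference_vectors(img, r=1):
    """PDV: for each interior pixel, the differences between its
    (2r+1)^2 - 1 neighbors and the center pixel value."""
    H, W = img.shape
    offs = [(dy, dx) for dy in range(-r, r + 1)
                     for dx in range(-r, r + 1) if (dy, dx) != (0, 0)]
    center = np.asarray(img[r:H - r, r:W - r], dtype=float)
    pdv = np.stack([img[r + dy:H - r + dy, r + dx:W - r + dx] - center
                    for dy, dx in offs], axis=-1)
    return pdv.reshape(-1, len(offs))

def pdv_histogram(img, proj, n_bits):
    """Project PDVs with a hash matrix, binarize at zero, and pool
    the resulting n_bits-bit codes into a normalized histogram."""
    codes = (pixel_difference_vectors(img) @ proj > 0).astype(int)
    ints = codes @ (1 << np.arange(n_bits))     # binary code -> integer bin
    hist = np.bincount(ints, minlength=2 ** n_bits).astype(float)
    return hist / hist.sum()
```

In the full method the projection matrix is learned from training PDVs rather than drawn at random, and histograms are computed per sub-block and concatenated.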
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.