“…The hand-crafted approaches use various image descriptors to calculate image features, which are used to distinguish between authentic irises and artifacts, typically through the use of Support Vector Machine classifiers. Popular techniques used in the calculation of PAD-related iris image features are Binarized Statistical Image Features (BSIF) [16], Local Binary Patterns (LBP) [12], Binary Gabor Patterns (BGP) [17], Local Contrast-Phase Descriptor (LCPD) [18], Local Phase Quantization (LPQ) [19], Scale Invariant Descriptor (SID) [20], Scale Invariant Feature Transform (SIFT) and DAISY [21], Locally Uniform Comparison Image Descriptor (LUCID) and CENsus TRansform hISTogram (CENTRIST) [22], Weber Local Descriptor (WLD) [18], Wavelet Packet Transform (WPT) [23], or image quality descriptors proposed by Galbally et al. [24]. Instead of "hand-crafting" effective feature extractors, one may also benefit from recently popular data-driven approaches that learn directly from the data how to process and classify iris images to solve the PAD task [21], [25]–[29].…”
The adoption of large-scale iris recognition systems around the world has brought to light the importance of detecting presentation attack images (textured contact lenses and printouts). This work presents a new approach to iris Presentation Attack Detection (PAD) by exploring combinations of Convolutional Neural Networks (CNNs) and input spaces transformed through Binarized Statistical Image Features (BSIF). Our method combines lightweight CNNs to classify multiple BSIF views of the input image. Following explorations of complementary input spaces that lead to more discriminative features for detecting presentation attacks, we also propose an algorithm to select the best (and most discriminative) predictors for the task at hand. An ensemble of predictors uses their expected individual performances to aggregate their results into a final prediction. Results show that this technique improves on the current state of the art in iris PAD, outperforming the winner of the LivDet-Iris 2017 competition in both intra- and cross-dataset scenarios, and illustrating the very difficult nature of the cross-dataset scenario.
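The abstract does not spell out the aggregation rule, so the following is only a minimal sketch of performance-weighted score fusion, assuming each ensemble member outputs a PAD score in [0, 1] and that its held-out validation accuracy is available to weight it (the function name `fuse_scores` and both parameters are illustrative, not taken from the paper):

```python
import numpy as np

def fuse_scores(scores, val_accuracies):
    """Aggregate per-predictor PAD scores into a final prediction.

    scores         : PAD scores in [0, 1] from each ensemble member
                     (higher = more likely a presentation attack).
    val_accuracies : each member's accuracy on a held-out validation
                     set, used here as its weight in the fusion.
    """
    w = np.asarray(val_accuracies, dtype=float)
    w = w / w.sum()  # normalise weights so they sum to 1
    return float(np.dot(w, np.asarray(scores, dtype=float)))
```

With this convention a strong predictor's vote dominates: `fuse_scores([1.0, 0.0], [0.9, 0.1])` returns 0.9, effectively trusting the member that performed better in validation.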
“…Existing methods include detection of fake representations of irises (paper printouts, textured contact lenses, prosthetic eyes, displays) or of non-conformant use of an actual eye. The most popular techniques in iris PAD use various image texture descriptors (Binarized Statistical Image Features (BSIF) [16], Local Binary Patterns (LBP) [6], Binary Gabor Patterns (BGP) [17], Local Contrast-Phase Descriptor (LCPD) [11], Local Phase Quantization (LPQ) [28], Scale Invariant Descriptor (SID) [10], Scale Invariant Feature Transform (SIFT) and DAISY [21], Weber Local Descriptor (WLD) [11], or Wavelet Packet Transform (WPT) [2]), image quality descriptors [8], or deep-learning-based techniques [19,12,21,23]. If hardware adaptations are possible, one may consider multi-spectral analysis [31] or estimation of three-dimensional iris features [20,13] for PAD.…”
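Most of the texture descriptors listed above follow the same pattern: encode local texture into per-pixel binary codes, then histogram the codes into a feature vector for an SVM. As an illustration, a minimal 8-neighbour, radius-1 LBP, the simplest of the listed descriptors (`lbp_8_1` is a hypothetical name, not from any cited implementation):

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbour Local Binary Patterns (radius 1).

    Each interior pixel receives an 8-bit code: a neighbour greater
    than or equal to the centre contributes a 1 bit. The normalised
    256-bin histogram of codes is the texture feature vector.
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    c = img[1:-1, 1:-1]  # centre pixels (interior only)
    # Offsets of the 8 neighbours, clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()
```

On a perfectly flat image every neighbour ties with its centre, so all pixels get code 255 and the histogram collapses to a single bin; real iris texture spreads mass across many bins, which is what the SVM discriminates on.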
Section: Related Work, 2.1 Presentation Attack Detection in Iris Recognition
This paper presents a deep-learning-based method for iris presentation attack detection (PAD) when iris images are obtained from deceased people. Post-mortem iris recognition, despite being a potentially useful method that could aid forensic identification, can also pose challenges when used inappropriately, i.e., when a dead organ of a person is utilized in an unauthorized way. Our approach is based on the VGG-16 architecture fine-tuned with a database of 574 post-mortem, near-infrared iris images from the Warsaw-BioBase-PostMortem-Iris-v1 database, complemented by a dataset of 256 images of live irises collected within the scope of this study. Experiments described in this paper show that our approach is able to correctly classify iris images as representing either a live or a dead eye in almost 99% of the trials, averaged over 20 subject-disjoint train/test splits. We also show that post-mortem iris detection accuracy increases as time since death elapses, and that we are able to construct a classification system with APCER=0%@BPCER≈1% (Attack Presentation and Bona Fide Presentation Classification Error Rates, respectively) when only samples collected at least 16 hours post-mortem are considered. Since acquisitions of ante- and post-mortem samples differ significantly, we applied countermeasures to minimize bias in our classification methodology caused by image properties that are not related to PAD. These included using the same iris sensor for collecting ante- and post-mortem samples, and analyzing class activation maps to ensure that the discriminant iris regions utilized by our classifier relate to properties of the eye, and not to those of the acquisition protocol. This paper offers the first PAD method known to us for a post-mortem setting, together with an explanation of the decisions made by the convolutional neural network.
Along with the paper we offer source code, weights of the trained network, and a dataset of live iris images to facilitate reproducibility and further research.
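The APCER and BPCER figures quoted above follow the standard ISO/IEC 30107-3 definitions. A minimal sketch of how they could be computed from classifier scores at a fixed decision threshold (function name and the higher-score-means-attack convention are illustrative assumptions, not from the paper):

```python
def apcer_bpcer(attack_scores, bona_fide_scores, threshold):
    """Compute PAD error rates at a fixed decision threshold.

    A sample is classified as an attack when its score exceeds
    `threshold` (higher score = more attack-like).

    APCER: fraction of attack presentations wrongly accepted as bona fide.
    BPCER: fraction of bona fide presentations wrongly rejected as attacks.
    """
    apcer = sum(s <= threshold for s in attack_scores) / len(attack_scores)
    bpcer = sum(s > threshold for s in bona_fide_scores) / len(bona_fide_scores)
    return apcer, bpcer
```

An operating point such as APCER=0%@BPCER≈1% is then found by sweeping the threshold until no attack sample falls below it, and reporting the bona fide error rate at that same threshold.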
“…Doyel et al. [10] ensembled 14 classifiers to address a three-class lens detection problem and achieved an accuracy of 97%. Lovish et al. [12] proposed a method based on Local Phase Quantization (LPQ) and Binary Gabor Patterns (BGP) for detecting cosmetic lenses. Lee et al. [11] proposed a hardware-based solution to distinguish between real and fabricated iris images based on Purkinje image formation.…”
The iris serves as one of the best biometric modalities owing to its complex, unique, and stable structure. However, it can still be spoofed using fabricated eyeballs and contact lenses. Accurate detection of contact lenses is a must for reliable performance of any biometric authentication system based on this modality. In this paper, we present a novel approach for detecting contact lenses using a Generalized Hierarchically tuned Contact Lens detection Network (GHCLNet). We propose a hierarchical architecture for three-class ocular classification, namely: no lens, soft lens, and cosmetic lens. Our network architecture is inspired by the ResNet-50 model. The network works on raw input iris images without any pre-processing or segmentation requirement, which is one of its greatest strengths. We performed extensive experimentation on two publicly available datasets, IIIT-D and ND, and on the IIT-K dataset (not publicly available) to ensure the generalizability of our network. The proposed architecture's results are quite promising and outperform the available state-of-the-art lens detection algorithms.