“…Ajita and Massimo [9] in 2009 addressed the feature-level fusion of multi-modal and multi-unit sources of information by proposing an approach that computes SIFT features from both biometric sources. For each biometric trait, feature selection on the extracted SIFT features was performed by spatial sampling; the features were then concatenated into a single vector using serial fusion. L. Latha and S. Thangasamy [10] in 2010 used left and right irises and retinal features; after the matching process, the scores are combined using a weighted sum rule. To validate their approach, experiments were conducted on iris and retina images obtained from the CASIA and VARIA databases, respectively.…”
Section: Related Work (mentioning)
confidence: 99%
“…With the orientation image and the adjusted gamma image as input, the local maxima are suppressed. Then the Circular Hough Transform is used to detect the iris and pupil boundaries, recovering both the radius and center coordinates [10].…”
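To make the voting step concrete, here is a minimal, library-free sketch of a Circular Hough Transform over a binary edge map. Edge detection and the paper's preprocessing (orientation image, gamma adjustment, non-maximum suppression) are omitted; the function name and candidate-radius interface are illustrative, not the cited implementation.

```python
import numpy as np

def hough_circle(edges: np.ndarray, radii) -> tuple:
    """Return (row, col, radius) of the strongest circle in a binary edge map."""
    h, w = edges.shape
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edges)                     # edge-pixel coordinates
    best, best_votes = None, -1
    for r in radii:
        acc = np.zeros((h, w), dtype=int)          # accumulator over candidate centers
        # each edge pixel votes for every center lying at distance r from it
        cy = (ys[:, None] + r * np.sin(thetas)[None, :]).round().astype(int)
        cx = (xs[:, None] + r * np.cos(thetas)[None, :]).round().astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)        # unbuffered accumulation
        votes = acc.max()
        if votes > best_votes:
            best_votes = votes
            row, col = np.unravel_index(acc.argmax(), acc.shape)
            best = (row, col, r)
    return best
```

For iris segmentation this voting is typically run twice, once with a radius range for the pupil and once for the limbus, since the two boundaries occupy different radius bands.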
Section: Iris Segmentation (mentioning)
confidence: 99%
“…The different stages of our multimodal biometric system are shown in figure (10); these stages are executed as follows:…”
Single biometric systems suffer from many challenges such as noisy data, non-universality, and spoof attacks. Multimodal biometric systems can address these limitations effectively by using two or more individual modalities. In this paper, fusion of fingerprint, iris, and face traits is performed at the score level in order to improve the accuracy of the system. Scores obtained from the classifiers are first normalized using min-max normalization. Then sum, product, and weighted sum rules are used for fusion. Experimental results show that multimodal biometric systems outperform unimodal biometric systems, and the weighted sum rule gives the best results compared with the sum or product methods.
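The score-level pipeline in this abstract can be sketched in a few lines: min-max normalization of each matcher's raw scores, then the three fusion rules. The weights below are illustrative placeholders, not values reported by the paper.

```python
def min_max(scores):
    """Min-max normalization of one matcher's raw scores into [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(fingerprint, iris, face, weights=(0.4, 0.4, 0.2)):
    """Fuse one probe's normalized scores with the sum, product,
    and weighted-sum rules."""
    s = [fingerprint, iris, face]
    return {
        "sum": sum(s),
        "product": s[0] * s[1] * s[2],
        "weighted_sum": sum(w * x for w, x in zip(weights, s)),
    }
```

In practice the weights for the weighted-sum rule are tuned on a development set, e.g. proportionally to each unimodal matcher's accuracy.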
“…Furthermore, another reason for choosing the iris and retina as biometric traits is their high level of uniqueness, performance, universality, and circumvention [12]. In the literature, only a few publications propose a multimodal authentication system using iris and retina; the most representative ones are [12,18,19], even though they relied upon the frequency domain alone. Furthermore, the aforementioned work involved vessel-based matching using feature points, that is, minutiae points.…”
The recent developments of information technologies, and the consequent need for access to distributed services and resources, require robust and reliable authentication systems. Biometric systems can guarantee high levels of security, and multimodal techniques, which combine two or more biometric traits, enforce more stringent constraints during the access phases. This work proposes a novel multimodal biometric system based on the combination of iris and retina in the spatial domain. The proposed solution follows the alignment-and-recognition approach commonly adopted in computational linguistics and bioinformatics; in particular, features are extracted separately for iris and retina, and the fusion is obtained by comparing scores via the Levenshtein distance. We evaluated our approach by testing several combinations of publicly available biometric databases, namely one for retina images and three for iris images. To provide comprehensive results, detection error trade-off-based metrics, as well as statistical analyses for assessing the authentication performance, were considered. The best achieved False Acceptance Rate and False Rejection Rate indices were and 3.33%, respectively, for the multimodal retina-iris biometric approach, which overall outperformed the unimodal systems. These results demonstrate the potential of the proposed approach as a multimodal authentication framework using multiple static biometric traits.
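A minimal sketch of how a Levenshtein-based comparison score can work, assuming the iris and retina features have already been serialised into symbol strings; that serialisation is specific to the paper and is not reproduced here.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance, O(len(a) * len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalise the edit distance into a [0, 1] match score."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))
```

Normalising by the longer string keeps the score comparable across templates of different lengths, which matters when fusing scores from two modalities.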
“…Vessel calibre was added in [6] as an additional feature to the template described in [5]. Templates based on feature points (positions of vessel branchings and crossovers) have been tested recently in [7,8,9,10,11,12], and the effect of matching such features in fovea-centred versus optical-disc-centred images has been studied in [13]. In this last paper, the orientation of the principal branch at a vessel branch point is included for improved matching robustness.…”
We represent the retina vessel pattern as a spatial relational graph, and match features using error-correcting graph matching. We study the distinctiveness of the nodes (branching and crossing points) compared with that of the edges and other substructures (nodes of degree k, paths of length k). On a training set from the VARIA database, we show that as well as nodes, three other types of graph substructure completely or almost completely separate genuine from imposter comparisons. We show that combining nodes and edges can improve the separation distance. We identify two retina graph statistics, the edge-to-node ratio and the variance of the degree distribution, that have low correlation with node match score.
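The two graph statistics named in the abstract are straightforward to compute once the vessel graph is given as an edge list; extracting nodes and edges from the retina image is the hard part and is not shown. A small sketch:

```python
def graph_stats(edges):
    """Compute the edge-to-node ratio and the variance of the degree
    distribution for an undirected graph given as a list of (u, v) edges."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    n = len(degree)
    degs = list(degree.values())
    mean = sum(degs) / n
    variance = sum((d - mean) ** 2 for d in degs) / n  # population variance
    return {"edge_to_node_ratio": len(edges) / n,
            "degree_variance": variance}
```

Because these statistics are largely independent of the node match score, they can serve as complementary evidence when combining substructure scores.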