One of the first steps in iris recognition is isolating (or segmenting) the iris from an image of the subject's eye area. This paper investigates new approaches for locating the pupil (inner) and limbic (outer) boundaries of the iris, namely a binary morphology and "center of mass" technique for the pupil boundary, and a local statistics approach for the limbic boundary. The methodology and results are presented using images from the University of Bath iris database. Index Terms: Image segmentation, image processing, image edge analysis.
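The "center of mass" idea for the pupil boundary can be illustrated with a short sketch: because the pupil is typically the darkest region of an eye image, thresholding yields a binary mask whose centroid approximates the pupil center. This is a hedged illustration of the general technique named in the abstract, not the authors' exact pipeline; the threshold value and the function name are assumptions.

```python
import numpy as np

def locate_pupil_center(eye, threshold=50):
    """Estimate the pupil center as the center of mass of dark pixels.

    `eye` is a 2-D grayscale array. `threshold` is an assumed intensity
    cutoff; a real system would pick it adaptively and apply binary
    morphology (e.g., opening) to suppress eyelash noise first.
    """
    mask = eye < threshold              # dark pixels are pupil candidates
    ys, xs = np.nonzero(mask)           # row/column coordinates of candidates
    if xs.size == 0:
        return None                     # no dark region found
    return xs.mean(), ys.mean()         # centroid as (x, y)
```

In practice the centroid estimate is then refined, for example by searching outward from the estimated center for the intensity transition that marks the pupil/iris boundary.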
The authors present a new algorithm for iris recognition. Segmentation is based on local statistics, and after segmentation, the image is subjected to contrast-limited adaptive histogram equalization. Feature extraction is then conducted using two directional filters (vertically and horizontally oriented). The presence (or absence) of ridges and their dominant directions are determined based on the maximum directional filter response. Templates are compared using fractional Hamming distance as the metric for a match/non-match determination. This Ridge-Energy-Direction (RED) algorithm reduces the effects of illumination, since only direction is used. Results are presented using four iris databases, and some comparison of recognition performance against a Daugman-based algorithm is provided.
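The fractional Hamming distance named above is the fraction of valid template bits that disagree between two binary templates. The following is a minimal sketch of that metric under common assumptions (binary templates with optional validity masks marking occluded or unreliable bits); it is not the authors' implementation.

```python
import numpy as np

def fractional_hamming(t1, t2, m1=None, m2=None):
    """Fraction of mutually valid bits that differ between two templates.

    `t1`, `t2` are binary arrays of equal shape. `m1`, `m2` are optional
    boolean masks marking reliable bits (True = valid); their use here
    reflects typical iris-template practice and is an assumption.
    """
    t1, t2 = np.asarray(t1, bool), np.asarray(t2, bool)
    valid = np.ones_like(t1, bool)
    if m1 is not None:
        valid &= np.asarray(m1, bool)
    if m2 is not None:
        valid &= np.asarray(m2, bool)
    n = valid.sum()
    if n == 0:
        return 1.0                      # nothing comparable: treat as maximally distant
    return np.count_nonzero((t1 ^ t2) & valid) / n
```

A distance near 0 indicates the same iris; distances near 0.5 are what two unrelated templates produce on average, so the match/non-match threshold sits between those regimes.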
One of the basic challenges to robust iris recognition is iris segmentation. This paper proposes the use of a feature saliency algorithm and an artificial neural network to perform iris segmentation. Many current iris segmentation approaches assume a circular shape for the iris boundary when the iris is directly facing the camera, yet occlusion by the eyelid can cause the visible boundary to have an irregular shape. In our approach, an artificial neural network is used to statistically classify each pixel of an iris image with no assumption of circularity. First, a feed-forward feature saliency technique is performed to determine which combination of features contains the greatest discriminatory information. Image brightness, local moments, local oriented energy measurements, and relative pixel location are evaluated for saliency. Next, the set of salient features is used as the input to a multi-layer perceptron feed-forward artificial neural network trained for classification. Testing showed 96.46 percent accuracy in determining which pixels in an image of the eye were iris pixels. For occluded images, the iris masks created by the neural network were consistently more accurate than the truth masks created using the circular iris boundary assumption. Post-processing to retain the largest contiguous piece in the iris mask increased the accuracy to 98.2 percent.
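The post-processing step described above (keeping only the largest contiguous piece of the predicted mask) amounts to connected-component labeling followed by selecting the biggest component. A self-contained sketch using 4-connectivity is shown below; the function name and connectivity choice are assumptions, and a production system would more likely use a library routine such as SciPy's `ndimage.label`.

```python
import numpy as np
from collections import deque

def keep_largest_component(mask):
    """Return a copy of a binary mask containing only its largest
    4-connected region; spurious small blobs are discarded."""
    mask = np.asarray(mask, bool)
    labels = np.zeros(mask.shape, int)  # 0 = unlabeled
    sizes = {}
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue                    # already assigned to a component
        current += 1
        labels[y, x] = current
        q, size = deque([(y, x)]), 0
        while q:                        # breadth-first flood fill
            cy, cx = q.popleft()
            size += 1
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
        sizes[current] = size
    if not sizes:
        return np.zeros_like(mask)
    best = max(sizes, key=sizes.get)
    return labels == best
```

This kind of cleanup helps because per-pixel classifiers occasionally mislabel isolated eyelash or specular-highlight pixels, while the true iris region is a single large blob.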
Iris recognition is an increasingly popular biometric due to its relative ease of use and high reliability. However, commercially available systems typically require on-axis images for recognition, meaning the subject is looking in the direction of the camera. The feasibility of using off-axis images is an important area of investigation for iris systems with more flexible user interfaces. The authors present an analysis of two image transform processes for off-axis images and an analysis of the utility of correcting for cornea refraction effects. The performance is assessed on the U.S. Naval Academy iris image database using the Ridge Energy Direction recognition algorithm developed by the authors, as well as with a commercial implementation of the Daugman algorithm.
The iris is currently believed to be the most accurate biometric for human identification. The majority of fielded iris identification systems are based on the highly accurate wavelet-based Daugman algorithm. Another promising recognition algorithm, by Ives et al., uses directional energy features to create the iris template. Both algorithms use Hamming distance to compare a new template to a stored database. Hamming distance is an extremely fast computation, but it weights all regions of the iris equally. Work from multiple authors has shown that different regions of the iris contain varying levels of discriminatory information. This research evaluates four post-processing similarity metrics for their accuracy impacts on the directional-energy and wavelet-based algorithms. Each metric builds on the Hamming distance method in an attempt to use the template information in a more salient manner. A similarity metric extracted from the output stage of a feed-forward multi-layer perceptron artificial neural network demonstrated the most promise. Accuracy tables and ROC curves of tests performed on the publicly available Chinese Academy of Sciences Institute of Automation database show that the neural-network-based distance achieves greater accuracy than Hamming distance at every operating point, while adding less than one percent computational overhead.
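The motivation behind these post-processing metrics (that plain Hamming distance weights all iris regions equally even though some carry more discriminatory information) can be made concrete with a region-weighted variant. The weighting scheme below is purely illustrative of that class of metrics, not one of the four metrics evaluated in the paper.

```python
import numpy as np

def weighted_hamming(t1, t2, weights):
    """Weighted fraction of disagreeing bits between two binary templates.

    `weights` assigns each bit a nonnegative weight reflecting how
    informative its iris region is assumed to be; equal weights reduce
    this to ordinary fractional Hamming distance. The weights themselves
    would have to be learned or estimated, which is the hard part.
    """
    t1, t2 = np.asarray(t1, bool), np.asarray(t2, bool)
    w = np.asarray(weights, float)
    total = w.sum()
    if total == 0:
        return 1.0                      # no informative bits to compare
    return float((w * (t1 ^ t2)).sum() / total)
```

Disagreements in highly weighted (more discriminative) regions then move the score more than disagreements in regions prone to occlusion or noise.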