“…They additionally found that the shape of the eyebrow, for images captured in the visible range, and the shape of the eye, for images captured in the NIR range, are the most discriminating features in periocular region-based authentication systems. Rattani et al. [34] implemented a strategy for gender classification using the periocular region on the VISOB dataset. They used the HOG feature descriptor for feature extraction and a Multi-Layer Perceptron for classification, obtaining a recognition accuracy of 90%.…”
Section: Histogram of Oriented Gradients (HOG)
Face and iris features are the most popular biometric traits for authenticating individuals. However, the inclusion of non-ideal images (such as images with pose variation, head tilt, subjects wearing spectacles, and variation in capture-device distance) can degrade the recognition accuracy of any biometric system. In such scenarios, periocular region-based biometric authentication (using the region surrounding the eye) is an emerging method that researchers now employ to improve recognition accuracy, specifically for non-ideal images and non-cooperative users. In this context, our key insight is to develop a system that treats the periocular region as a biometric trait and to evaluate its effectiveness for classifying non-ideal images in two scenarios: 1) images with different pose variations, and 2) images captured at varying camera standoff distances. In this work we evaluated three handcrafted feature descriptors, 1) Histogram of Oriented Gradients (HOG), 2) the Bag of Features (BOF) model, and 3) Local Binary Patterns (LBP), on two databases, 1) the ORL face database and 2) the UBIPr periocular image database, and found that the HOG feature descriptor shows superior performance compared to the BOF and LBP descriptors for periocular region-based biometric authentication systems.
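As a concrete illustration of the HOG descriptor evaluated above, the following is a minimal NumPy sketch of per-cell orientation histograms. It omits block normalization and the other refinements of the full HOG pipeline, and the cell size and bin count are illustrative defaults, not settings from the paper.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal HOG-style descriptor: per-cell histograms of gradient
    orientations (no block normalization; for illustration only)."""
    img = img.astype(float)
    # Central-difference gradients
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            # Magnitude-weighted orientation histogram for this cell
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    f = np.concatenate(feats)
    n = np.linalg.norm(f)
    return f / n if n > 0 else f

# Example: 32x32 synthetic image with a vertical edge
img = np.zeros((32, 32))
img[:, 16:] = 255.0
f = hog_features(img)
print(f.shape)  # (16 cells * 9 bins,) = (144,)
```

In the full descriptor, overlapping blocks of cells are contrast-normalized before concatenation, which is what gives HOG its robustness to illumination changes.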
“…The prediction of attributes from other biometric traits has also been actively studied in the literature (see Table I). There is also related attribute prediction research from visible spectrum ocular images [10], [11], [12] and [13].…”
Section: B Predictable Attributes From NIR Ocular Images
confidence: 99%
“…It should also be noted that there is some gender prediction work using the periocular region in the visible wavelength spectrum in [10], [11], [12] and [13].…”
Section: A Gender
confidence: 99%
“…Over the 5 random iterations, there were 5656 ± 34 images in the training dataset and 3749 ± 34 images in the test set. 1) BioCOP2009 Race Results: The 8-bit BSIF was used in this work as a compromise between prediction accuracy and computational processing time. While 9-bit or 10-bit BSIF may provide slightly better results, the additional memory and processing time required for each experiment would be quite substantial given the large size of the BioCOP2009 dataset.…”
Section: A Race
confidence: 99%
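The bit-size tradeoff discussed in the snippet above can be sketched as follows: an n-bit BSIF code packs the signs of n filter responses into one code per pixel, so the code histogram (and hence memory and processing cost) grows as 2^n. Real BSIF uses filters learned via ICA from natural-image patches; the random filters below are only a stand-in to illustrate the encoding step.

```python
import numpy as np

def bsif_encode(img, n_bits=8, ksize=7, seed=0):
    """BSIF-style encoding: filter with n_bits linear filters, binarize
    each response at zero, and pack the bits into an n_bits code per
    pixel. NOTE: real BSIF filters are learned with ICA; random
    zero-mean filters here are an illustrative stand-in."""
    rng = np.random.default_rng(seed)
    img = img.astype(float)
    h, w = img.shape
    code = np.zeros((h - ksize + 1, w - ksize + 1), dtype=np.uint16)
    for bit in range(n_bits):
        filt = rng.standard_normal((ksize, ksize))
        filt -= filt.mean()  # zero-mean filter, as in BSIF
        # Valid-mode 2-D correlation via a sliding-window sum
        resp = np.zeros(code.shape, dtype=float)
        for i in range(ksize):
            for j in range(ksize):
                resp += filt[i, j] * img[i:i + resp.shape[0],
                                         j:j + resp.shape[1]]
        code |= (resp > 0).astype(np.uint16) << bit
    return code

# Histogram size doubles with every extra bit: 8 bits -> 256 bins,
# 10 bits -> 1024 bins, which drives the memory/time tradeoff.
img = (np.add.outer(np.arange(20), np.arange(20)) % 11) * 10.0
for n in (8, 10):
    hist = np.bincount(bsif_encode(img, n_bits=n).ravel(), minlength=2**n)
    print(n, hist.size)  # 8 256 / 10 1024
```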
“…Bobeldyk and Ross [28] showed, for gender prediction using BSIF, that the ocular region provides greater gender prediction accuracy than the iris-only region. We have performed a similar experiment in order to test the prediction accuracy of the iris-only and iris-excluded ocular image regions (see Figure 4). [Footnote 11: Some subjects may have more images than others.] The results of these experiments are shown in Table XI.…”
Recent research has explored the possibility of automatically deducing information such as gender, age and race of an individual from their biometric data. While the face modality has been extensively studied in this regard, the iris modality has received less attention. In this paper, we first review the medical literature to establish a biological basis for extracting gender and race cues from the iris. Then, we demonstrate that it is possible to use simple texture descriptors, like BSIF (Binarized Statistical Image Feature) and LBP (Local Binary Patterns), to extract gender and race attributes from an NIR ocular image used in a typical iris recognition system. The proposed method predicts gender and race from a single eye image with an accuracy of 86% and 90%, respectively. In addition, the following analyses are conducted: (a) the role of different parts of the ocular region in attribute prediction; (b) the influence of gender on race prediction, and vice versa; (c) the impact of eye color on gender and race prediction; (d) the impact of image blur on gender and race prediction; (e) the generalizability of the method across different datasets; and (f) the consistency of prediction performance across the left and right eyes.
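For reference, the LBP texture descriptor named in this abstract can be sketched in a few lines of NumPy: each pixel's 3x3 neighbourhood is thresholded against the centre pixel and packed into an 8-bit code, and the histogram of codes serves as the feature vector fed to a classifier. This is a generic sketch of basic LBP, not the authors' implementation.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: each pixel's 8 neighbours are thresholded against
    the centre pixel and packed into an 8-bit code."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]  # centre pixels (border excluded)
    # Neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (di, dj) in enumerate(offs):
        nb = img[1 + di:img.shape[0] - 1 + di,
                 1 + dj:img.shape[1] - 1 + dj]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes: the texture feature vector."""
    codes = lbp_image(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

# Example on a small synthetic texture
img = (np.arange(64).reshape(8, 8) % 7) * 30.0
h = lbp_histogram(img)
print(h.shape)  # (256,)
```

Variants such as uniform or rotation-invariant LBP reduce the 256 codes to a smaller, more robust set of bins, which is what most attribute-prediction pipelines actually use.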