2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2019.00101
Predicting Gender From Iris Texture May Be Harder Than It Seems

Abstract: Predicting gender from iris images has been reported by several researchers as an application of machine learning in biometrics. Recent works on this topic have suggested that the preponderance of the gender cues is located in the periocular region rather than in the iris texture itself. This paper focuses on teasing out whether the information for gender prediction is in the texture of the iris stroma, the periocular region, or both. We present a larger dataset for gender from iris, and evaluate gender predic…

Cited by 13 publications (22 citation statements)
References 14 publications
“…The same observation is noted for other CNN models used in the study. In [14], Kuehlkamp et al. suggest that the discriminative power of the iris texture for gender is weak and that the gender-related information is primarily in the periocular region, which is supported by our observations as well.…”
Section: CNN (supporting)
confidence: 90%
“…A possible explanation for this observation could be the stable and distinct gender cues for middle-aged adults when compared to young and older adults. Further, the periocular region contains the gender cues (also confirmed in [14]), which differ between males and females.…”
Section: Key Findings (mentioning)
confidence: 91%
“…We also used a person-disjoint split to prevent bias, meaning no subjects were shared between the training and validation data sets [22]. As our data set is not large, we used fivefold cross-validation to train and evaluate the performance of the DL model [23]. In cross-validation, the entire data set was split into five groups, and four groups were used as training data, whereas one group was used for validation.…”
Section: Methods (mentioning)
confidence: 99%
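
The person-disjoint, fivefold protocol described in the statement above can be reproduced with a grouped cross-validation split in which the subject ID is the grouping key, so that no subject's images appear in both the training and validation folds. The following is a minimal Python sketch, assuming scikit-learn is available; the function name person_disjoint_cv and the user-supplied train_fn are hypothetical placeholders, not code from the cited papers.

# Minimal sketch of subject-disjoint fivefold cross-validation, assuming
# X holds per-image features, y holds gender labels, and subject_ids holds
# one subject identifier per image. train_fn is a hypothetical callable
# that returns a fitted model exposing a predict() method.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.metrics import accuracy_score

def person_disjoint_cv(X, y, subject_ids, train_fn, n_splits=5):
    """K-fold CV in which no subject is shared between training and validation."""
    X, y, subject_ids = np.asarray(X), np.asarray(y), np.asarray(subject_ids)
    folds = GroupKFold(n_splits=n_splits)
    accuracies = []
    for train_idx, val_idx in folds.split(X, y, groups=subject_ids):
        # Sanity check: the grouped split is person-disjoint by construction.
        assert not set(subject_ids[train_idx]) & set(subject_ids[val_idx])
        model = train_fn(X[train_idx], y[train_idx])
        preds = model.predict(X[val_idx])
        accuracies.append(accuracy_score(y[val_idx], preds))
    return float(np.mean(accuracies)), float(np.std(accuracies))

Reporting the mean and standard deviation over folds, rather than a single split, is what makes the fivefold protocol informative on a small data set; the grouped split is what prevents subject identity from leaking cues between training and validation.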
“…The interest in privacy-enhancing mechanisms is not limited only to facial images. With the development of automatic recognition techniques and improvements in their capabilities, it is today possible to extract potentially sensitive information from a wide variety of biometric modalities, e.g., [311], [312], [313], [314]. B-PETs are, therefore, becoming increasingly relevant across the board, and research efforts focusing on privacy protection with modalities other than faces are expected to intensify in the future.…”
Section: Visual Privacy Beyond Faces (mentioning)
confidence: 99%