2019
DOI: 10.1007/978-3-030-13469-3_68
Measuring the Gender and Ethnicity Bias in Deep Models for Face Recognition

Cited by 42 publications (37 citation statements)
References 16 publications
“…Our study began with this question, which was investigated through the COVID-19 issue, the topic of greatest interest to people around the world. Recently, some studies on AI bias according to gender have been conducted mainly in image recognition and natural language processing fields ( Acien et al, 2018 ; Costa-jussà, 2019 ), but most AI studies have focused on improving performance through trial and error: hyperparameter search on networks ( Forghani, 2020 ). Especially with the prediction model of severity or mortality at an early stage of COVID-19 ( Altschul et al, 2020 ; Zhu et al, 2020 ; Lessmann et al, 2021 ; Paiva Proença Lobo Lopes et al, 2021 ; Shan et al, 2021 ; Yaşar et al, 2021 ), to our best knowledge, this paper serves as the first attempt to investigate the gender-specific models.…”
Section: Discussion
confidence: 99%
“…However, in order to be acceptable in a practical context, that performance level must be retained when implemented in a real-world context, using non-pristine images, and in a cost/form factor that is realistic for a store to deploy. Further, knowing that many machine-learning solutions are susceptible to degradation resulting from training dataset mismatch with respect to ethnicity [11], gender [12], image quality [13], lighting conditions [14], and combinations of these parameters with other characteristics [15][16][17], we chose to intentionally bombard the neural net model with different presentation attacks to quantify how quickly performance degrades.…”
Section: Motivation
confidence: 99%
“…Other recent works in face recognition (FR) technology introduce additional modalities, such as profile information, to the problem of bias [38], [39]. Other questions concern the measures of biases in FR systems [28], [33], FR templates [40], score level biases [27], and biases in existing (i.e., trained) models [41]. Wang et al introduced a reinforcement learning-based race balanced network to find optimal margins for non-Caucasians as a Markov decision process before passing to the deep model that learns policies for an agent to select margins as an approximation of the Q-value function-it reduces variations in scattering between features across races [42].…”
Section: Bias in FR
confidence: 99%
“…We train a multi-layered perceptron (MLP) to classify subgroups on top of the features to show the extent to which the proposed [method] removes subgroup information. We can then measure the amount of information present in the face representation [33].…”
Section: Privacy Preserving Experiments
confidence: 99%
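The probing idea in the excerpt above — training a classifier on top of frozen face features to measure how much subgroup information they still carry — can be sketched as follows. This is a minimal illustration using synthetic embeddings and scikit-learn, not the cited authors' implementation; the data, dimensions, and hyperparameters are placeholder assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for precomputed face embeddings: two synthetic "subgroups" whose
# feature distributions differ by a small mean shift (illustrative only).
n_per_group, dim = 500, 128
emb_a = rng.normal(0.0, 1.0, size=(n_per_group, dim))
emb_b = rng.normal(0.3, 1.0, size=(n_per_group, dim))
X = np.vstack([emb_a, emb_b])
y = np.array([0] * n_per_group + [1] * n_per_group)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# MLP probe on top of the (frozen) features: test accuracy well above
# chance (0.5 for two balanced subgroups) indicates the representation
# still encodes subgroup information.
probe = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
probe.fit(X_tr, y_tr)
acc = accuracy_score(y_te, probe.predict(X_te))
print(f"subgroup probe accuracy: {acc:.2f}")
```

In a privacy-preserving setting, the goal is the opposite of good probe performance: after subgroup information is removed from the representation, this probe's accuracy should fall toward chance level.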