2020
DOI: 10.1177/2378023120967171
Diagnosing Gender Bias in Image Recognition Systems

Abstract: Image recognition systems offer the promise to learn from images at scale without requiring expert knowledge. However, past research suggests that machine learning systems often produce biased output. In this article, we evaluate potential gender biases of commercial image recognition platforms using photographs of U.S. members of Congress and a large number of Twitter images posted by these politicians. Our crowdsourced validation shows that commercial image recognition systems can produce labels that are cor…
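The abstract describes an audit of commercial labeling platforms on politician portraits. As a minimal sketch of how such label collection might look, assuming the google-cloud-vision client library and a hypothetical portraits/{gender}/ folder layout (neither is specified in the abstract):

```python
# Minimal sketch of collecting labels from a commercial image recognition
# API for a gender-bias audit. Assumes the google-cloud-vision client
# library; the portraits/{gender}/ folder layout is hypothetical.
from collections import Counter
from pathlib import Path

from google.cloud import vision

client = vision.ImageAnnotatorClient()

def labels_for(path: Path, min_score: float = 0.5) -> list[str]:
    """Return the label descriptions the API assigns to one image."""
    image = vision.Image(content=path.read_bytes())
    response = client.label_detection(image=image)
    return [a.description.lower()
            for a in response.label_annotations
            if a.score >= min_score]

# Tally labels separately for portraits of female and male legislators
# so that label frequencies can be compared across the two groups.
label_counts = {"female": Counter(), "male": Counter()}
for gender in label_counts:
    for img in Path("portraits", gender).glob("*.jpg"):
        label_counts[gender].update(labels_for(img))

print(label_counts["female"].most_common(10))
print(label_counts["male"].most_common(10))
```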

Cited by 58 publications (47 citation statements)
References 60 publications
“…The measures of emotion used in our analyses all have the potential to be biased in their evaluations of the behavior of men and women. As machine learning systems get closer to replicating human behavior, they also replicate human biases (Schwemmer et al 2020). We recognize that these biases may have important theoretical and practical implications for our research.…”
Section: Algorithmic (Gender) Biases and Measuring Candidate Emotion
Citation type: mentioning, confidence: 83%
“…Facial displays of emotion: Emotion-detection APIs have a number of biases (including gender and racial biases) encoded into their processes (Buolamwini and Gebru 2018). For example, Schwemmer et al (2020) find that classifiers are much more likely to assign terms associated with physical appearance to images of female (versus male) members of Congress. There are also gender biases in the classification of specific emotions: a neutral face, happiness, and anger tend to produce the lowest levels of gender bias (Khanal et al 2018).…”
Section: Algorithmic (Gender) Biases and Measuring Candidate Emotion
Citation type: mentioning, confidence: 99%
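To make the reported pattern concrete: a simple way to compare how often appearance-related labels are attached to images of women versus men is to compute the share of images in each group that received at least one such label. The sketch below uses an invented APPEARANCE_TERMS vocabulary and made-up label sets; it is illustrative only:

```python
# Illustrative check for the pattern reported by Schwemmer et al. (2020):
# are appearance-related labels assigned more often to images of women?
# Both the APPEARANCE_TERMS vocabulary and the label sets are invented.
APPEARANCE_TERMS = {"smile", "hairstyle", "beauty", "fashion", "dress"}

def appearance_rate(labels_per_image: list[set[str]]) -> float:
    """Share of images that received at least one appearance-related label."""
    hits = sum(1 for labels in labels_per_image if labels & APPEARANCE_TERMS)
    return hits / len(labels_per_image)

female_images = [{"smile", "official"}, {"spokesperson"}, {"dress", "event"}]
male_images = [{"official", "suit"}, {"spokesperson"}, {"businessperson"}]

print(f"female: {appearance_rate(female_images):.2f}")  # 0.67
print(f"male:   {appearance_rate(male_images):.2f}")    # 0.00
```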
“…This does not mean we should blindly trust the algorithm to correctly annotate all images. Even though Google's AI has been found to be more accurate than other systems (Oberoi, 2016), it has also rightly been criticised for racial biases, such as annotating Black people as "gorillas" (Simonite, 2018), and for systematic gender bias (Schwemmer et al, 2020). The confidence score somewhat improves reliability by giving us a mechanism through which to rule out the most uncertain coding by the AI.…”
Section: Methods and Data
Citation type: mentioning, confidence: 99%
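The confidence-based filtering mentioned in this excerpt can be sketched as a simple threshold over (label, score) pairs; the annotation format and the 0.90 cutoff are assumptions, not details from the cited study:

```python
# Sketch of the confidence-based filtering described above: keep only
# annotations the classifier itself is reasonably sure about. The
# (label, score) format and the 0.90 cutoff are assumptions.
def filter_by_confidence(annotations, threshold=0.90):
    """Drop machine annotations whose confidence falls below the threshold."""
    return [(label, score) for label, score in annotations if score >= threshold]

raw = [("protest", 0.97), ("crowd", 0.93), ("festival", 0.41)]
print(filter_by_confidence(raw))  # [('protest', 0.97), ('crowd', 0.93)]
```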
“…To detect race and gender bias in search outputs, we relied on manual coding of the outputs. While some earlier studies (e.g., [40]) use image recognition for extracting image features, its applicability for bias detection has recently been questioned [45], given that recognition approaches may themselves be biased. Hence, we used two coders to classify all the collected images based on the categories listed below.…”
Section: Methods
Citation type: mentioning, confidence: 99%
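The excerpt describes two human coders classifying every image. A common companion step, not stated in the excerpt and therefore an assumption here, is to quantify inter-coder agreement, for example with Cohen's kappa via scikit-learn:

```python
# Two human coders classified every image (as in the excerpt above).
# Quantifying their agreement with Cohen's kappa is an assumed companion
# step, not something the excerpt itself reports.
from sklearn.metrics import cohen_kappa_score

coder_a = ["woman", "man", "woman", "woman", "man", "man"]
coder_b = ["woman", "man", "man", "woman", "man", "man"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # chance-corrected agreement
```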