2022
DOI: 10.3389/frai.2022.976838

Exploring gender biases in ML and AI academic research through systematic literature review

Abstract: Automated systems that implement Machine learning (ML) and Artificial Intelligence (AI) algorithms present promising solutions to a variety of technological and non-technological issues. Although industry leaders are rapidly adopting these systems for everything from marketing to national defense operations, these systems are not without flaws. Recently, many of these systems have been found to inherit and propagate gender and racial biases that disadvantage minority populations. In this paper, we analyze academi…

Cited by 21 publications (8 citation statements) | References: 112 publications

“…A widespread, systemic, and implicit gender bias exists in most of the fields in which AI-ML is widely used, including search and ranking algorithms; online recommendation systems; robotics; Natural Language Processing (NLP); and automated decision-support systems used in social programmes, national defence, justice, medicine and healthcare, and policing. They are programmed with a binary concept of gender that does not reflect the real world and completely ignores the complexity of identities, most common among Y- and Z-generation members (Shrestha and Das, 2022). Algorithmic unfairness has been starkly evidenced in the groundbreaking paper "Gender shades: intersectional accuracy disparities in commercial gender classification" (Buolamwini and Gebru, 2018), which demonstrated that facial recognition systems were more than 30% inaccurate at classifying the faces of women of colour but had the greatest accuracy for white men.…”
Section: NATO's Technology Problem
confidence: 99%
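The disparity described above is, at bottom, a per-subgroup accuracy comparison. The following is a minimal sketch of that intersectional evaluation in Python; the subgroup labels, records, and numbers are invented for illustration and are not data from Buolamwini and Gebru's study.

    # A minimal sketch of the intersectional evaluation behind "Gender shades":
    # accuracy is computed per (gender, skin-type) subgroup rather than in
    # aggregate, exposing disparities an overall metric would hide. All labels
    # and records below are invented; they are not from the cited study.
    from collections import defaultdict

    def subgroup_accuracy(records):
        """records: iterable of (gender, skin_type, y_true, y_pred) tuples."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for gender, skin, y_true, y_pred in records:
            total[(gender, skin)] += 1
            correct[(gender, skin)] += int(y_true == y_pred)
        return {key: correct[key] / total[key] for key in total}

    # Hypothetical predictions from a commercial gender classifier:
    records = [
        ("female", "darker", "female", "male"),
        ("female", "darker", "female", "female"),
        ("male", "lighter", "male", "male"),
        ("male", "lighter", "male", "male"),
    ]
    for key, acc in subgroup_accuracy(records).items():
        print(key, f"{acc:.0%}")   # e.g. ('female', 'darker') 50%

An aggregate accuracy over these records would read 75% and hide the gap entirely; reporting per-subgroup figures is what makes the disparity visible.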
“…However, in reality, cyberspace is forged by the intersectionality of age, economic status, culture, gender, and severely biased algorithms. While many gender-bias detection and mitigation methods have been proposed in the literature, they are not widely applied, nor are the ethical and legal aspects discussed (Shrestha and Das, 2022).…”
Section: NATO's Technology Problem
confidence: 99%
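One example of the mitigation methods that passage alludes to is reweighing (Kamiran and Calders, 2012), which weights each (group, label) combination so that group membership and outcome are statistically independent in the weighted training set. The sketch below is illustrative only; the groups, labels, and data are invented.

    # A hedged sketch of one mitigation technique from that literature:
    # reweighing assigns each example the weight P(g) * P(y) / P(g, y), so
    # that group g and label y are independent in the weighted training set.
    from collections import Counter

    def reweighing_weights(groups, labels):
        """Return one weight per example: P(g) * P(y) / P(g, y)."""
        n = len(groups)
        p_g = Counter(groups)
        p_y = Counter(labels)
        p_gy = Counter(zip(groups, labels))
        return [
            (p_g[g] / n) * (p_y[y] / n) / (p_gy[(g, y)] / n)
            for g, y in zip(groups, labels)
        ]

    groups = ["f", "f", "f", "m", "m", "m", "m", "m"]
    labels = [0, 0, 1, 1, 1, 1, 0, 1]
    print([round(w, 2) for w in reweighing_weights(groups, labels)])
    # -> [0.56, 0.56, 1.88, 0.78, 0.78, 0.78, 1.88, 0.78]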
“…A potentially significant, yet subtle, consequence of improper data collection might be an algorithm that performs poorly for certain subgroups or subpopulations with the targeted disease or condition as a result of under-representation of those subgroups in the training set [30,31]. In radiology applications, it is important to be vigilant so that training/validation dataset selection incorporates safeguards to minimize underlying distortions for under-represented and/or vulnerable populations and so that already-existing health-care inequities are not perpetuated or exacerbated [27,32–34].…”
Section: Data
confidence: 99%
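A minimal sketch of the kind of representation audit this passage recommends: compare each subgroup's share of a training set against a reference population share and flag under-represented groups. The column values, reference shares, and tolerance threshold below are assumptions chosen for illustration.

    # Flag subgroups whose share of the training data falls materially below
    # a reference (e.g., patient-population) share. All numbers are invented.
    from collections import Counter

    def representation_gaps(train_labels, reference_shares, tolerance=0.05):
        """train_labels: subgroup label per training example.
        reference_shares: dict mapping subgroup -> expected population share."""
        counts = Counter(train_labels)
        n = len(train_labels)
        gaps = {}
        for group, expected in reference_shares.items():
            observed = counts.get(group, 0) / n
            if observed < expected - tolerance:
                gaps[group] = (observed, expected)
        return gaps

    # Illustrative: women are 50% of the population but 20% of training data.
    train = ["male"] * 80 + ["female"] * 20
    print(representation_gaps(train, {"male": 0.5, "female": 0.5}))
    # -> {'female': (0.2, 0.5)}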
“…Concerning fairness of AI, Shrestha and Das point out that although the relevant discussion within the realm of ML and AI is a recent development, fairness problems and discrimination have roots within human society, where unfair treatment of minorities is documented in the data created over time. The inadvertent learning and perpetuation of implicit biases is therefore a longstanding issue in ML/AI systems, owing to historical biases in the data we have accumulated, and it can lead to discriminatory outcomes such as an advertisement algorithm showing more high-paying technical jobs to men than to women [190].…”
Section: Black Box Problem and Explainability
confidence: 99%
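The advertising example lends itself to a simple demographic-parity check: compare the rate at which the high-paying job ad is shown to each group. The sketch below uses invented impression counts and group names; it is not data from the cited work.

    # A demographic-parity check on ad delivery: the exposure rate per group
    # and the gap between groups. Impression counts here are illustrative.
    def exposure_rates(impressions):
        """impressions: list of (group, shown) pairs, shown being True/False."""
        rates = {}
        for group in {g for g, _ in impressions}:
            shown = [s for g, s in impressions if g == group]
            rates[group] = sum(shown) / len(shown)
        return rates

    impressions = [("men", True)] * 180 + [("men", False)] * 20 \
                + [("women", True)] * 90 + [("women", False)] * 110
    rates = exposure_rates(impressions)
    print(rates)                                 # e.g. {'men': 0.9, 'women': 0.45}
    print("parity gap:", rates["men"] - rates["women"])   # 0.45

A parity gap near zero would indicate comparable exposure; the large gap in this invented example is the pattern the quoted passage describes.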