2020
DOI: 10.1007/978-3-030-52485-2_2

Mitigating Gender Bias in Machine Learning Data Sets

Cited by 39 publications (18 citation statements)
References 22 publications
“…Analyses that examine algorithm outcomes in different intersectional subgroups, such as Black and other women of colour (Buolamwini and Gebru, 2018), can help to uncover intersectional impacts. Machine learning techniques can be deployed to uncover and address bias in the training data that feed algorithms (Leavy et al., 2020), and computer scientists are exploring different definitions of 'fairness' in algorithms along with new computational techniques to ameliorate algorithmic bias (Mehrabi et al., 2019). While these efforts will not eradicate technologically-facilitated structural bias, they will reduce its effects as we work towards a multi-faceted approach to TFV more generally.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
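The claim in this snippet, that machine learning techniques can surface bias in training data, can be made concrete with a minimal sketch. Everything here is hypothetical: the toy corpus and the gendered lexicons are placeholders, and Leavy et al. (2020) describe far richer linguistic features (gendered titles, premodifiers, and so on) than this simple frequency count.

```python
import re
from collections import Counter

# Hypothetical mini-corpus standing in for real training data.
CORPUS = [
    "The chairman praised his engineers after the launch.",
    "She worked as a nurse before the committee meeting.",
    "The spokesman said the board would reconvene.",
]

# Illustrative lexicons only; a serious audit would use curated lists.
GENDERED_TERMS = {
    "male": {"he", "his", "him", "chairman", "spokesman"},
    "female": {"she", "her", "hers", "chairwoman", "spokeswoman"},
}

def gendered_term_counts(sentences):
    """Count male- vs. female-marked tokens as a crude bias indicator."""
    counts = Counter()
    for sentence in sentences:
        for token in re.findall(r"[a-z']+", sentence.lower()):
            for gender, lexicon in GENDERED_TERMS.items():
                if token in lexicon:
                    counts[gender] += 1
    return counts

print(gendered_term_counts(CORPUS))  # Counter({'male': 3, 'female': 1})
```

A skewed ratio like this does not prove bias on its own, but it flags which parts of a corpus deserve closer linguistic inspection before the data feeds a model.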
“…Sheng et al. [74] further measure bias associated with sexual orientation by comparing the associated sentiment. We also have the extensive study by Leavy et al. [56] on CBOW trained on articles from The Guardian newspaper and the British Digital Library. In addition, Hutchinson et al. [83] studied the perception of models towards disabled people, and Bhardwaj et al. [47] combined the study of gender bias in BERT via sentiment analysis with gender separability.…”
Section: Association Tests
Citation type: mentioning (confidence: 99%)
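To illustrate the kind of embedding association test this snippet refers to, here is a minimal sketch in the spirit of WEAT. The toy random vectors and word lists are placeholders, not the CBOW embeddings or test sets of any cited study; with real vectors trained on a corpus such as the Guardian articles mentioned above, the same score would expose genuine associations.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(vec, attrs_a, attrs_b):
    """WEAT-style score: mean similarity to attribute set A minus set B."""
    return (np.mean([cosine(vec, a) for a in attrs_a])
            - np.mean([cosine(vec, b) for b in attrs_b]))

# Toy 3-d vectors in place of real CBOW embeddings.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=3)
       for w in ["engineer", "nurse", "he", "him", "she", "her"]}

male_attrs = [emb["he"], emb["him"]]
female_attrs = [emb["she"], emb["her"]]

for word in ["engineer", "nurse"]:
    score = association(emb[word], male_attrs, female_attrs)
    print(f"{word}: {score:+.3f}")  # > 0 leans male, < 0 leans female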
“…However, bias does not arise as a result of algorithm design in most cases; rather, algorithms inherit existing bias from historical data that contains remnants of bias from human decision-making and culture. In this case, the root of this kind of prejudice lies in the way traditional gender ideology and latent discrimination are captured in the data from which the algorithm learns (Leavy et al., 2020). The main component of the data used to train the AI recruiting system was the résumés of the company's employees, most of whom were male.…”
Section: Fairness, Bias and Discrimination
Citation type: mentioning (confidence: 99%)
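The inheritance mechanism this snippet describes can be reproduced with a few lines of synthetic data: if historical hiring decisions carried a gender bonus, a model fit to those decisions learns the bonus as a feature weight. The data below is invented and the model is a plain scikit-learn logistic regression, not the recruiting system discussed above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "historical hiring" records: a qualification score plus a
# gender flag (1 = male). Past decisions favoured male applicants.
rng = np.random.default_rng(42)
n = 1000
gender = rng.integers(0, 2, size=n)   # 0 = female, 1 = male
skill = rng.normal(size=n)            # qualification signal
# Biased historical label: skill matters, but being male adds a bonus.
hired = skill + 1.5 * gender + rng.normal(scale=0.5, size=n) > 1.0

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The fitted model puts a large positive weight on the gender feature:
# it inherits the historical bias rather than correcting it.
print("skill weight: ", round(model.coef_[0][0], 2))
print("gender weight:", round(model.coef_[0][1], 2))
```

Nothing in the algorithm itself is gendered; the bias enters entirely through the labels, which is exactly the point the quoted passage makes.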