Proceedings of the Conference on Fairness, Accountability, and Transparency 2019
DOI: 10.1145/3287560.3287572
Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting

Cited by 153 publications (83 citation statements)
References 20 publications
“…For example, in unsupervised learning, word embeddings often contain biases (Bolukbasi et al, 2016;Caliskan et al, 2017;Garg et al, 2018) which persist even after attempts to remove them (Gonen and Goldberg, 2019). There are many examples of bias in supervised learning contexts: YouTube's captioning models make more errors when transcribing women (Tatman, 2017), AAE is more likely to be misclassified as non-English by widely used language classifiers (Blodgett and O'Connor, 2017), numerous gender and racial biases exist in sentiment classification systems (Kiritchenko and Mohammad, 2018), and errors in both co-reference resolution systems and occupational classification models reflect gendered occupational patterns (Zhao et al, 2018;De-Arteaga et al, 2019).…”
Section: Related Work (mentioning)
confidence: 99%
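The embedding biases cited above can be probed directly. Below is a minimal sketch, in the spirit of Bolukbasi et al. (2016) and Caliskan et al. (2017), that compares how strongly a few occupation words associate with "he" versus "she" by cosine similarity. It assumes gensim is installed and uses its pretrained "glove-wiki-gigaword-50" vectors; the word lists are illustrative choices, not those used in the cited papers.

    # Sketch: probe gender associations in off-the-shelf word embeddings.
    # Assumes gensim is installed; the vectors are downloaded on first use.
    import gensim.downloader as api

    kv = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe vectors

    for occupation in ["engineer", "nurse", "programmer", "receptionist"]:
        # Positive gap: the occupation vector sits closer to "he" than to "she".
        gap = kv.similarity(occupation, "he") - kv.similarity(occupation, "she")
        print(f"{occupation:>14}: cosine(he) - cosine(she) = {gap:+.3f}")

Running this on standard GloVe vectors typically shows occupations skewing toward one gendered pronoun or the other, which is the kind of association the cited debiasing work tries, only partially successfully, to remove.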
“…Recent work has shown evidence of substantial bias in machine learning systems, which is typically a result of bias in the training data. This includes both supervised (Blodgett and O'Connor, 2017;Tatman, 2017;Kiritchenko and Mohammad, 2018;De-Arteaga et al, 2019) and unsupervised natural language processing systems (Bolukbasi et al, 2016;Caliskan et al, 2017;Garg et al, 2018). Machine learning models are currently being deployed in the field to detect hate speech and abusive language on social media platforms including Facebook, Instagram, and Youtube.…”
Section: Introduction (mentioning)
confidence: 99%
“…For example, the sentence “he is an engineer” is more likely to appear in a corpus than “she is an engineer” due to the current gender disparity in engineering. Consequently, any NLP system that is trained on such a corpus will likely learn to associate engineer with men, but not with women (De-Arteaga et al, 2019).…”
Section: Introduction (mentioning)
confidence: 99%
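To make the mechanism in that excerpt concrete, here is a toy sketch: a bag-of-words occupation classifier trained on a small, gender-imbalanced set of hand-written biographies ends up putting weight on the pronouns themselves. The data and the model choice (scikit-learn logistic regression) are illustrative assumptions, not taken from De-Arteaga et al. (2019).

    # Toy sketch: gendered pronouns become predictive features when the
    # training corpus is gender-imbalanced across occupations.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    bios = [
        "he is an engineer who designs bridges",
        "he is an engineer working on embedded systems",
        "he is an engineer at a robotics firm",
        "she is an engineer who studies materials",
        "she is a nurse at the city hospital",
        "she is a nurse who works night shifts",
        "she is a nurse in the pediatric ward",
        "he is a nurse on the surgical team",
    ]
    labels = ["engineer"] * 4 + ["nurse"] * 4

    vec = CountVectorizer()
    X = vec.fit_transform(bios)
    clf = LogisticRegression().fit(X, labels)

    # clf.classes_ is ['engineer', 'nurse']; negative coefficients push toward
    # "engineer", positive ones toward "nurse". Because "he" co-occurs with the
    # engineer bios more often, the pronoun itself carries (spurious) signal.
    for word in ["he", "she"]:
        idx = vec.vocabulary_[word]
        print(word, clf.coef_[0][idx])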
“…That is, artifacts like predictive accuracy are referred to as being "allocated" across people qua the group of which they are a member. For example, researchers describe "fair allocation of predictions" across groups of people (Zliobaite, 2015) and refer to "allocational harms" that are understood as disparities in a model's accuracy for different groups of people (De-Arteaga et al, 2019;Davidson et al, 2019). In another example, a model's predictive accuracy is explicitly characterized as a "resource" to be fairly allocated via a distributive rule (Hashimoto et al, 2018).…”
Section: What Are Resources? (mentioning)
confidence: 99%
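One concrete reading of "allocational harms" as accuracy disparities is the gap in true positive rate between demographic groups, a metric closely related to the per-gender TPR gap reported in De-Arteaga et al. (2019). The sketch below uses made-up labels, predictions, and group assignments purely for illustration.

    # Sketch: quantify an "allocational harm" as the true-positive-rate gap
    # between two groups. Data below are made up for illustration only.
    import numpy as np

    def tpr_gap(y_true, y_pred, group):
        # True positive rate within each group, then their difference.
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        rates = []
        for g in (0, 1):
            positives = (group == g) & (y_true == 1)
            rates.append((y_pred[positives] == 1).mean())
        return rates[0] - rates[1]

    # The classifier recovers the positive label for 3 of 4 positives in group 0
    # but only 2 of 4 in group 1, so accuracy is "allocated" unevenly.
    y_true = [1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0]
    y_pred = [1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0]
    group  = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
    print(tpr_gap(y_true, y_pred, group))  # 0.75 - 0.5 = 0.25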