2020
DOI: 10.1145/3386296.3386305

Unintended machine learning biases as social barriers for persons with disabilities

Abstract: Persons with disabilities face many barriers to full participation in society, and the rapid advancement of technology has the potential to create ever more. Building equitable and inclusive technologies for people with disabilities demands attention not only to accessibility, but also to how social attitudes towards disability are represented within technology. Representations perpetuated by machine learning (ML) models often inadvertently encode undesirable social biases from the data on which they are trained. […]

Cited by 29 publications (29 citation statements). References 13 publications.

“…Good documentation should discuss and explain features, providing context about who collected and annotated the data, how, and for what purpose (Gebru et al., 2018; Denton et al., 2020). This provides dataset users with information they can leverage to select appropriate datasets for their tasks and avoid unintentional misuse (Gebru et al., 2018).…”
Section: Transparency (mentioning)
confidence: 99%
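The documentation practice this excerpt describes can be made concrete with a short sketch. Below is a minimal, hypothetical Python example of capturing datasheet-style context as structured metadata; the field names and the dataset described are invented for illustration and are not a schema from Gebru et al. or Denton et al.

```python
from dataclasses import dataclass, field

# Sketch of datasheet-style documentation attached to a dataset as
# structured metadata. All names below are illustrative assumptions,
# not a standard from the cited works.

@dataclass
class Datasheet:
    name: str
    collected_by: str          # who collected the data
    annotated_by: str          # who produced the labels
    collection_method: str     # how the data was gathered
    intended_purpose: str      # the task the dataset was built for
    known_limitations: list = field(default_factory=list)

# Hypothetical dataset card a user could inspect before choosing the dataset.
sheet = Datasheet(
    name="toxicity-comments-v1",
    collected_by="research team X",
    annotated_by="crowd workers via platform Y",
    collection_method="scraped public forum comments",
    intended_purpose="training toxicity classifiers",
    known_limitations=["annotator pool skews young", "English only"],
)
print(sheet.intended_purpose)
```

Keeping this record alongside the data lets downstream users check whether their task matches the stated purpose before reuse, which is the misuse-avoidance point the excerpt makes.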
“…Violent and pornographic content can cause children shock and disgust. A study from 2020 revealed that in the European Union the most frequently reported harmful content that children were exposed to (at least monthly) was hate messages (average of 17%), followed by violent images (average of 13%) (Smahel et al., 2020 [14]; Council of Europe, 2018 [15]). The report also found that exposure to different kinds of harmful content is interrelated: if a child sees one type of harmful content, it is more likely that the same child will also report seeing other types (Smahel et al., 2020 [14]).…”
Section: Content Risks (mentioning)
confidence: 99%
“…Few works address this problem, and more research is required to produce cyberbullying detectors using machine learning algorithms that are unbiased and transparent. Several works have proposed methods to mitigate unintended bias in word embeddings [171-173]. Similarly, measuring and mitigating a classification algorithm's bias is also important for fairer classifier performance [173].…”
Section: Handling of a Dynamic Corpus (mentioning)
confidence: 99%
“…Several works have proposed methods to mitigate unintended bias in word embeddings [171-173]. Similarly, measuring and mitigating a classification algorithm's bias is also important for fairer classifier performance [173]. Data is collected and annotated through subjective human annotation.…”
Section: Handling of a Dynamic Corpus (mentioning)
confidence: 99%
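As a concrete illustration of the kind of embedding-bias measurement and mitigation these excerpts refer to, here is a minimal sketch of projection-based neutralization (in the spirit of Bolukbasi et al., 2016; not necessarily the method used in [171-173]). The word pairs and random vectors are placeholders standing in for trained embeddings.

```python
import numpy as np

def bias_direction(emb: dict, pairs: list) -> np.ndarray:
    """Estimate a bias direction from difference vectors of contrasting word pairs."""
    diffs = [emb[a] - emb[b] for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def bias_score(vec: np.ndarray, direction: np.ndarray) -> float:
    """Cosine similarity between a word vector and the bias direction;
    values far from 0 indicate a stronger association with one pole."""
    return float(vec @ direction / (np.linalg.norm(vec) * np.linalg.norm(direction)))

def neutralize(vec: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of a vector that lies along the (unit) bias direction."""
    return vec - (vec @ direction) * direction

# Toy example: random vectors stand in for trained embeddings, and the
# word list is an illustrative assumption, not data from the cited works.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50)
       for w in ["disabled", "nondisabled", "competent", "helpless"]}

d = bias_direction(emb, [("disabled", "nondisabled")])
print("before:", bias_score(emb["competent"], d))   # possibly far from 0
print("after: ", bias_score(neutralize(emb["competent"], d), d))  # ~0
```

The same projection score can serve both roles the excerpt names: as a measurement (how strongly a word aligns with the bias direction) and, via neutralization, as a mitigation applied before the embeddings feed a downstream classifier.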