2023
DOI: 10.5210/fm.v28i1.12903
Definition drives design: Disability models and mechanisms of bias in AI technologies

Abstract: The increasing deployment of artificial intelligence (AI) tools to inform decision-making across diverse areas, including healthcare, employment, social benefits, and government policy, presents a serious risk for disabled people, who have been shown to face bias in AI implementations. While there has been significant work on analysing and mitigating algorithmic bias, the broader mechanisms of how bias emerges in AI applications are not well understood, hampering efforts to address bias where it begins. In this…

Cited by 6 publications (6 citation statements)
References 40 publications
“…The cultural context of the AI developers and industry, e.g. their biases and prejudices, influences the way the technology is designed, developed, and implemented (Ntoutsi et al., 2020; Baker and Hawn, 2021; Pagano et al., 2023; Shachar and Gerke, 2023; Newman-Griffis et al., 2023). In turn, AI use feeds back into reinforcing or amplifying the cultural practices that contributed to it.…”
Section: Preprint - Please Cite the Original
confidence: 99%
“…This places assessment of AI risks, as well as AI governance, in a proactive focus on the social contexts where AI systems are designed and used, rather than in a reactive, technology-centred perspective. This contextualised perspective anchors the questions of where ethical risks and failures arise, and what we can do about them, in the specific decisions made by people and organisations during AI design and implementation, rather than after systems are extant and released into the world (11). AI thinking can therefore make management of AI risk more tractable and actionable, both for internal management and for external governance.…”
Section: Responsible AI Framework
confidence: 99%
“…These programs attempt to provoke a response from the general user that helps them better "appreciate" the disability in question, although their effects are likely to be grounded in fear of the disability or a problematic sympathy for it, both of which can easily marginalize the personhood of people with the disability in question. The other conception we have discussed leans more toward the political/relational model of disability, which centers on disabled people's political agency (Kafer, 2013; Newman-Griffis et al., 2023).…”
Section: Teaching the Alternative And Beyond: Intersectional Disabili...
confidence: 99%
“…In this article, we use the term "disabled people" based on a social model approach. Additionally, our framing is rooted in the political/relational model of disability, which centers on disabled people's political agency (Kafer, 2013; Newman-Griffis et al., 2023). We supply key Japanese terms with their original kanji when they are first mentioned in this article, where appropriate.…”
Section: Introduction
confidence: 99%