2020
DOI: 10.28968/cftt.v6i2.33043
Redistribution and Rekognition

Abstract: Computer scientists, and artificial intelligence researchers in particular, have a predisposition for adopting precise, fixed definitions to serve as classifiers (Agre, 1997; Broussard, 2018). But classification is an enactment of power; it orders human interaction in ways that produce advantage or suffering (Bowker & Star, 1999). In so doing, it obscures the messiness of human life, masking the work of the people involved in training machine learning systems, and hiding the uneven distribution of its impa…

Cited by 17 publications (11 citation statements). References 34 publications.
“…Intersectional and feminist approaches to AI (Wellner & Rothman, 2020; West, 2020) invite us to rethink and challenge discriminatory AI, suggesting strategies that are polyvocal, multimodal and experimental (Ciston, 2019) and that promise a fairer, slower, consensual and collaborative AI (Toupin, 2023).…”
Section: Key Concepts: AI Bias and Intersectionality
Confidence: 99%
“…21 Thus, bias can also be the result of algorithm design or decisions around metrics used to evaluate a particular phenomenon. While some technical approaches to addressing bias exist, completely eliminating bias in algorithms is impossible, and, as some have argued,22,23 exclusive focus on reducing bias in AI systems may distract from other, more important, interventions.…”
Section: Potential Concerns Articulated by the DRM Community
Confidence: 99%
“…Of particular challenge is the inability of AI systems to accurately record and incorporate social data, which include cultural beliefs, economic status, linguistic identity and other social determinants of health.19 While efforts to address algorithmic bias are increasingly interdisciplinary, some scholars argue that the dominant approach of relying on data science mechanisms alone fails to address structural and systemic causes of marginalisation.20–22 Feminist ethicopolitical critiques highlight factors including historically entrenched power structures,22 exclusionary social demographics in the AI workforce,20,21 and sociocultural legacies of colonialism23 as some of the key drivers of algorithmic bias that deserve attention.…”
Section: Introduction
Confidence: 99%