2020
DOI: 10.48550/arxiv.2007.02423
Preprint

Participation is not a Design Fix for Machine Learning

Cited by 29 publications (36 citation statements)
References 0 publications
“…Future work should identify challenges for implementing QA in applied ML and AI development contexts, acknowledging the constraints that ML and AI development environments present. 1 Practitioners of these strategies must ensure that they engage with and compensate participants with care and respect, and that their input is meaningfully integrated into system design [38].…”
Section: Discussion
confidence: 99%
“…However, removing human agency from the training phase also removes any possibility for ethical accountability and oversight from the critical part of machine learning. In response, we see a series of human-in-the-loop solutions that have tried to combine the engineering benefits of these approaches with the accountability of human agency (Gupta et al 2020, Sloane et al 2020). These include fairness analytics tools such as AI Fairness 360 and the What-If Tool, and explainability tools such as LIME.…”
Section: Machine Learning and Its Ethical Discontent
confidence: 99%
“…Although having more diverse AI teams is critical, the "politics of inclusion" [22,92] of relying on marginalized practitioners may not be sufficient to effect systemic change given the dominant structural forces [e.g., 64] that might lead to those practitioners being ignored, tokenized, or fired [22,31,92]. Recent calls to foster greater engagement with affected communities when developing sociotechnical systems may serve as a counter to business imperatives [e.g., 20, 80], although it is important to be wary about extractive, tokenistic approaches [e.g., 3,4,74], which can unfairly burden members of marginalized groups [59].…”
Section: Implications Of Business Imperatives That Shape Disaggregate...
confidence: 99%
“…This question has a long historical resonance given Suchman's argument to attend to the specificities of place in technology development practices, including its micropolitics and cultural imaginaries, to avoid the reproduction of "neocolonial geographies of center and periphery" (and, more generally, to resist such neat binaries of center and periphery, local and global) [77]. More recently, Sloane et al argued that the ever-increasing drive to deploy AI systems at scale may be fundamentally at odds with calls to involve direct stakeholders in the development of those systems [74]. Our findings reveal implications of this tension, as many participants reported deploying AI systems in geographic contexts for which they have no processes for engaging with direct stakeholders.…”
Section: Implications Of Deploying AI Systems At Scale
confidence: 99%