2022
DOI: 10.48550/arxiv.2206.14983
Preprint

A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms

Cited by 3 publications (9 citation statements)
References 0 publications
“…The domains where agencies are attempting to apply AI are often highly socially complex and high-stakes, including tasks like screening child maltreatment reports [58], allocating housing to unhoused people [35], predicting criminal activity [32], or prioritizing medical care for patients [45]. In these domains, where some public sector agencies have a fraught history of interactions with marginalized communities [4,54], it has proven particularly challenging to design AI systems that avoid further perpetuating social biases [10], obfuscating how decisions are made [31], or relying on inappropriate quantitative notions of what it means to make accurate decisions [13]. Public sector agencies are increasingly under fire for implementing AI tools that fail to bring value to the communities they serve, contributing to a common trend: AI tools are implemented, then discarded after failing in practice [21,56,67,68].…”
Section: Background 2.1 Public Sector AI and Overcoming AI Failures
confidence: 99%
“…Many failures in public sector AI projects can be traced back to decisions made during the earliest problem formulation and ideation stages of AI design [13,47,65,68]. AI design concepts that make it to production may be "doomed to fail" from the very beginning, for a variety of reasons.…”
Section: Introduction
confidence: 99%
“…Problem formulation and fairness. Scholars have identified various reasons why the choice of target might raise concerns with fairness: some outcomes or qualities of interest might simply be more evenly distributed across the population than others [23,34]; certain outcomes or qualities of interest might be easier to predict with similar degrees of accuracy across the population than others [8]; some kinds of selection bias might cause certain outcomes or qualities of interest to be observed more or less frequently in some groups than in others, even if they occur at similar rates in reality [27]; and certain targets might suffer from more so-called "label bias" than others, that is, systematically less accurate observations of the true value of the target for members of some groups than others [9,21,22]. Indeed, one way to understand the Obermeyer et al. study is as a form of label bias, since healthcare costs acted as a systematically inaccurate measure of underlying healthcare needs.…”
Section: Related Work
confidence: 99%
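The label-bias mechanism this citing paper describes can be made concrete with a small simulation. The following Python sketch is illustrative only and not taken from any of the cited works: the group labels, the "need" and cost variables, and the 0.7 suppression factor are all assumptions chosen to mirror the Obermeyer et al. finding that observed costs understate true healthcare needs for one group.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: two groups with identical distributions of true need.
group = rng.integers(0, 2, size=n)
need = rng.gamma(shape=2.0, scale=1.0, size=n)

# Label bias (assumption, mirroring the Obermeyer et al. finding): observed
# cost understates need for group 1 at every level of need.
suppress = np.where(group == 1, 0.7, 1.0)
cost_prev = need * suppress + rng.normal(0.0, 0.2, size=n)  # last year's cost
cost_next = need * suppress + rng.normal(0.0, 0.2, size=n)  # label to predict

# Least-squares fit of next-year cost on prior-year cost (the proxy target).
X = np.column_stack([np.ones(n), cost_prev])
beta, *_ = np.linalg.lstsq(X, cost_next, rcond=None)
pred = X @ beta

# Flag the top decile of predicted cost, as a care-management program might.
flagged = pred >= np.quantile(pred, 0.9)
for g in (0, 1):
    sel = flagged & (group == g)
    print(f"group {g}: share flagged = {sel.mean() / (group == g).mean():.3f}, "
          f"mean true need among flagged = {need[sel].mean():.2f}")
# Group 1 is flagged less often and must be sicker to be flagged: bias in
# the cost label propagates directly into the ranking.
```

Running the sketch shows both effects at once: group 1 is underrepresented among the flagged, and the flagged members of group 1 have higher mean true need, even though the two groups were constructed to be identical in need.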
“…We remove several collinear features from the original Obermeyer et al. dataset so that models can be fit with OLS regression (instead of regularized regression). Specifically, we remove features whose variable name matches the following regular expression: (gagne_sum_tm1|normal_tm1|esr_.*-low_tm1|crp_(min|mean|max).…”
Section: Dataset Details
confidence: 99%
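A minimal sketch of this preprocessing step, under stated assumptions: the quoted regular expression is truncated, so its tail here is a hypothetical completion; the dataframe, the file name, and the target column name (cost_t) are likewise assumptions, and pandas plus statsmodels stand in for whatever tooling the citing authors actually used.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical completion of the truncated pattern from the excerpt; only
# the prefixes shown there are taken from the source.
COLLINEAR_PATTERN = r"gagne_sum_tm1|normal_tm1|esr_.*-low_tm1|crp_(?:min|mean|max)"

def drop_collinear(df: pd.DataFrame) -> pd.DataFrame:
    """Drop feature columns whose names match the collinear-feature pattern."""
    to_drop = df.columns[df.columns.str.contains(COLLINEAR_PATTERN, regex=True)]
    return df.drop(columns=to_drop)

def fit_ols(df: pd.DataFrame, target: str = "cost_t"):
    """Fit plain OLS (no regularization) on the reduced feature set."""
    features = drop_collinear(df).drop(columns=[target])
    X = sm.add_constant(features)
    return sm.OLS(df[target], X).fit()

# Usage (assuming a numeric dataframe loaded from the released
# Obermeyer et al. replication data; the file name is hypothetical):
# model = fit_ols(pd.read_csv("dissecting_bias_data.csv"))
# print(model.summary())
```

Dropping the matched columns before fitting is what lets plain OLS succeed here: near-duplicate predictors make the design matrix ill-conditioned, which regularized regression tolerates but unpenalized least squares does not.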