2022
DOI: 10.31234/osf.io/wn7ae
Preprint

AI in the Government: Responses to Failures

Abstract: Artificial Intelligence (AI) is pervading the government and transforming how public services are provided to consumers—from allocation of government benefits to enforcement of the law, monitoring of risks, and provision of services. Despite technological improvements, AI systems are fallible and may err. How do consumers respond when learning of AI’s failures? In thirteen preregistered studies (N = 3,724) across policy areas, we show that algorithmic failures are generalized more broadly than human failures. …

Cited by 2 publications (1 citation statement)
References: 0 publications
“…Consequently, raising awareness of algorithmic racial bias can deter Black consumers, but not White consumers, from using “good” (i.e., fair and beneficial) algorithms. Relatedly, Longoni et al. (forthcoming) found that people are more likely to generalize algorithmic failures than human failures.…”
Section: Review Integration and Prediction
Confidence: 99%