2021
DOI: 10.1145/3479507
Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents

Abstract: While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for robot blame, na…

Cited by 21 publications (12 citation statements)
References 103 publications (121 reference statements)
“…The current analysis, which does not document an aversion toward algorithm-mediated investments, is in contrast to earlier research that showed an aversion toward algorithms making investment decisions (Niszczota and Kaszás, 2020). Similar to Kirchkamp and Strobel (2019) and Stuart and Kneer (2021), sharing responsibility with machines led to neither the diffusion nor the exacerbation of responsibility.…”
Section: Towards a Model of Financial Machines (contrasting)
confidence: 99%
“…Our most surprising finding was that AI assistants were considered more responsible for positive rather than negative outcomes. Our findings align with the inverse outcome effect for blame ascription, 16 where people blamed the AI system less when the outcome was harmful rather than neutral. Stuart & Kneer 16 suggest that the outcome effect arises because people, in case of a harmful outcome, apply a high(er) standard of moral agency—and more demanding standards of intentionality attribution—to identify the person responsible.…”
Section: Discussion (supporting)
confidence: 85%
“…This supports our original hypothesis that instrumental AI-assistants are perceived as agents capable of sharing responsibility with other agents. 13 , 14 , 15 , 16 , 17 …”
Section: Discussion (mentioning)
confidence: 99%
“…What does explain the surprising effect of praise and blame in our study? Stuart and Kneer (2021) found that people assign more knowledge and blame to an autonomously acting robot when it does not commit any harm than when it does. This "inverse outcome effect" mirrors the results found in our study.…”
Section: The Psychology of Free Will (mentioning)
confidence: 99%