2023
DOI: 10.1016/j.isci.2023.107494
Intelligence brings responsibility - Even smart AI assistants are held responsible

Cited by 7 publications (2 citation statements)
References 62 publications (81 reference statements)
“…Note, however, that until AI can be designed to exhibit an awareness of right and wrong, it would be a conceptual mistake to transfer our praise or blame onto an artificial agent. The argument here is not that our practices of blame and praise are biased or flawed (see, e.g., Longin et al., 2023, and Porsdam et al., 2023, for empirical and normative discussion of this point). The argument is that our practices demand that the goal of moral evaluation is to shape the recipient's conscience, structure her cognitive repertoire to better align with shared values.…”

Section: Why Morally Trustworthy AI Is Unnecessary (mentioning)

confidence: 98%
“…However, the increasing development of AI systems that assist and collaborate with humans, rather than replacing them (Balazadeh Meresht et al., 2022; De et al., 2020, 2021; Mozannar et al., 2022; Okati et al., 2021; Raghu et al., 2019; Straitouri et al., 2021; Wilder et al., 2021), calls for more empirical and theoretical research to shed light on the way humans make responsibility judgments in situations involving human-AI teams (Cañas, 2022). Recent work in that area has identified several factors that influence responsibility judgments (Awad et al., 2020; Lima et al., 2021; Longin et al., 2023). However, this work has not attempted to characterize the underlying cognitive processes that support such judgments.…”

Section: Introduction (mentioning)

confidence: 99%