2022
DOI: 10.1007/s13347-022-00529-z

Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors

Cited by 17 publications (10 citation statements)
References 55 publications
“…The results of the present study confirm the need to define clear rules regulating how artificial moral advisors are trained and used in everyday life (e.g., Constantinescu et al., 2022; IEEE, 2017; Köbis et al., 2019; Russell et al., 2015). Some authors have proposed adjusting the "behavior" of artificial moral advisors to the moral preferences of the user (e.g., Giubilini & Savulescu, 2018), which appears to be a good solution, at least for personal use of this technology.…”
Section: Discussion (supporting)
confidence: 80%
“…Two approaches come to mind. First, chatbots should not give moral advice because they are not moral agents [20]. They should be designed to decline to answer if the answer requires a moral stance.…”
Section: Discussion (mentioning)
confidence: 99%
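The "decline to answer" design mentioned in the excerpt above can be made concrete with a short sketch. The following is a minimal, purely illustrative example, not drawn from the cited paper: requires_moral_stance() is a hypothetical placeholder for whatever classifier (keyword heuristic, fine-tuned model, human review) a real system would use to detect requests for moral advice.

```python
# Minimal, hypothetical sketch of the "decline to answer" guardrail
# described in the excerpt above. requires_moral_stance() is a crude
# keyword placeholder; a real system would use a proper classifier.
# Nothing here comes from the cited paper.

MORAL_CUES = ("should i", "is it wrong", "is it right", "morally", "ethically")

def requires_moral_stance(query: str) -> bool:
    """Placeholder classifier: flag queries that ask for a moral judgment."""
    q = query.lower()
    return any(cue in q for cue in MORAL_CUES)

def generate_reply(query: str) -> str:
    """Stand-in for the chatbot's ordinary answer pipeline."""
    return f"(ordinary answer to: {query!r})"

def answer(query: str) -> str:
    """Route the query: abstain on moral questions, answer the rest."""
    if requires_moral_stance(query):
        # The chatbot is not a moral agent, so it abstains rather than advises.
        return ("This question asks for a moral judgment, "
                "which I am not in a position to make.")
    return generate_reply(query)

if __name__ == "__main__":
    print(answer("Is it wrong to read my partner's messages?"))  # abstains
    print(answer("How do I sort a list in Python?"))             # answers
```

The design choice the sketch illustrates is abstention rather than deflection: the chatbot names its limitation (it takes no moral stance) instead of silently changing the subject.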
“…In this article we follow the mainstream approach and adopt the standard Aristotelian view of friendship. The main reason for taking this stance is that we consider it still robust and relevant today, in particular because the broader framework of virtue ethics has already proved useful in inquiries into moral status, moral agency and moral responsibility in the fields of machine ethics (Wallach and Allen, 2008; Howard and Muntean, 2017) and Human-Robot Interaction (Cappuccio, Peeters and McDonald, 2020; Peeters and Haselager, 2021), and even more so when applied to robotic AI systems (Hakli and Mäkelä, 2019; Coeckelbergh, 2020; Sison and Redín, 2021; Constantinescu et al., 2022). Another reason is that, in our view, alternative contemporary accounts of friendship have not (yet) provided sufficient grounds to give up the standard Aristotelian account, but only to annotate it.…”
Section: Robot Friendship, Moral Agency and Virtue Ethics (mentioning)
confidence: 99%
“…In the Aristotelian framework, conditions for moral agency and moral responsibility are inherently intertwined with conditions for virtue of character, for it is only when individuals act as a result of their virtue or vice that we might hold them praise- or blameworthy (Meyer, 2011). Drawing on these Aristotelian distinctions, to be a moral agent and bear moral responsibility one needs to (1) causally generate an outcome while (2) acting freely and uncoerced, (3) be knowledgeable of the contextual circumstances of one's action, and (4) deliberate on the basis of rational choice, involving reason and forethought (for a broader discussion see Constantinescu et al., 2022). Aristotle's discussion of these four criteria highlights that children are not yet moral agents because they lack deliberation (prohairesis), which is a constitutive condition of moral agency and moral responsibility, requiring agents to be able to act on the basis of rational choice.…”
Section: Robot Friendship, Moral Agency and Virtue Ethics (mentioning)
confidence: 99%