2018
DOI: 10.1016/j.cognition.2018.08.003
People are averse to machines making moral decisions

Cited by 339 publications (318 citation statements)
References 70 publications
“…Moreover, the human nurse was generally trusted more than the robot nurse, and this effect was slightly stronger when the patient's autonomy was violated. Together, these results strongly suggest that, in our context, decisions involving AIs are morally condemnable (compared with similar decisions made by humans) only when in violation of personal autonomy; this raises some tensions between our results and those of Bigman and Gray (2018), which suggest that the aversion to robot decisions is driven by mind perception.…”

Section: Discussion of Study (supporting)
Confidence: 49%
“…The moral psychology of robotics and artificial intelligence (AI) is a new and growing field of research on moral cognition (Awad et al., 2018; Malle et al., 2015; Bonnefon et al., 2016). In the past few years, several articles have examined the moral relations between humans and new emerging AI technologies from different angles, such as the moral psychological aspects of robot prostitution (Koverola et al., in press), self-driving vehicles (Awad et al., 2018; Malle et al., 2015; Bonnefon et al., 2016), and moral interaction in medical decision-making between robots and humans (Bigman & Gray, 2018). Recent studies have also drawn on transhumanist themes, examining the role of sacredness and sexual disgust in the condemnation of mind-upload technologies, and human attitudes towards brain enhancement technologies (Medaglia, Yaden, Helion & Haslam, 2019).…”

Section: Moral Psychology of Nursing Robots - Humans Dislike Violation (mentioning)
Confidence: 99%
“…Similarly, Złotowski, Yogeeswaran, and Bartneck () found that when robots were depicted in a video as being autonomous, they elicited more negative attitudes and perceptions of threat than non-autonomous robots. Interestingly, when asked whether robots should make moral decisions (e.g., deciding on the punishment for a convicted criminal), people are uncomfortable with robots making such decisions because robots are perceived to lack complete minds (Bigman & Gray, 2018).…”

Section: Robots as a Threat (mentioning)
Confidence: 99%
“…Furthermore, since self-awareness may lead to the development of a theory of mind (see the last section), a self-aware and "mentalizing" robot could better cooperate with humans and other AI agents. Bigman and Gray (2018) suggest that increasing elements of robot self-awareness, such as theory of mind, situation awareness, intention, and free will, could serve as a foundation for increasing human trust in robot autonomy, because humans tend to judge these and other perceived mental faculties as necessary for autonomy.…”

Section: Why Would Self-Awareness Benefit Robots? (mentioning)
Confidence: 99%