2018
DOI: 10.1038/s41586-018-0637-6
The Moral Machine experiment

Abstract: With the rapid development of artificial intelligence come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behavior. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. […]

Cited by 1,125 publications (929 citation statements); references 21 publications.
“…Differences in moral preferences lie at the heart of many types of conflicts between individuals and groups (1-3) and have even led to wars between nations (4-6). More generally, such differences in moral preferences account for the substantial variation in how we judge the actions of other humans and artificial agents (7). Given the relevance and timeliness of moral preferences, it is remarkable how little we understand about the neural and cognitive mechanisms that determine our moral preferences.…”
Section: Main Text
“…We measured these two types of subjective valuation processes with structurally equivalent choice tasks that differed only in the content of the choice-options: Valuing human lives for moral decisions and valuing monetary rewards for financial decisions. We decided to focus on human lives since subjective moral values are essential for the difficult decisions whether some lives are more valuable than others, and since there are considerable individual differences in this regard (7). One example are decisions about recipients of an organ transplant, for which it is often required to implement a policy ranking among the potential recipients to decide who is most deserving to receive the organ (36).…”
Section: Main Text
“…The COMEST report on robotics ethics from the World Commission on the Ethics of Scientific Knowledge and Technology has a more complex view of the issue of liability, stating that it is a case of diluted or shared responsibility between “(…) robot designers, engineers, programmers, manufacturers, investors, sellers and users” and that, due to their functional versatility, robots can have “(…) implications far beyond the intentions of their developers.” In order to better define the challenges and capture the concerns, several individuals and organizations, such as the Global Initiative on Ethics of Autonomous and Intelligent Systems, promoted by The Institute of Electrical and Electronics Engineers, have crowdsourced the practical considerations and the ethical guidelines in HRI for the makers of this technology. Other attempts have been made to create a set of rules for machines to make moral decisions that are based on human moral decision-making (e.g., Awad et al.). In addition, several robot-related standards are being discussed and developed by the ISO (see https://www.iso.org/, e.g., ISO/TC299 Robotics).…”
Section: Ethical Issues and Practical Considerations
“…It was shown recently (Awad et al., 61) that Japanese and Western citizens disagree significantly as to whether an automated car should, in an emergency, seek to save an old pedestrian or a child.…”