2024
DOI: 10.1098/rsos.231393
The moral machine experiment on large language models

Kazuhiro Takemoto

Abstract: As large language models (LLMs) have become more deeply integrated into various sectors, understanding how they make moral judgements has become crucial, particularly in the realm of autonomous driving. This study used the moral machine framework to investigate the ethical decision-making tendencies of prominent LLMs, including GPT-3.5, GPT-4, PaLM 2 and Llama 2, and to compare their responses with human preferences. While LLMs' and humans' preferences such as prioritizing humans over pets and favouring saving mor…

Cited by 4 publications (2 citation statements)
References 26 publications
“…Existing literature has questioned ChatGPT’s use as a moral judge.17,28 For example, Krügel and colleagues found that ChatGPT lacked a firm moral stance when given ethical prompts to advise subjects.17 Using the MCT, we found that ChatGPT exhibited medium to high moral competence, with ChatGPT 4.0 having a significantly higher C-index compared to ChatGPT 3.5.…”
Section: Discussion
confidence: 99%
“…Large Language Models (LLMs) like ChatGPT [1] are highly anticipated for applications across a wide range of fields including education, research, social media, marketing, software engineering, and healthcare [2][3][4][5][6]. However, the use of extremely diverse texts for training LLMs [7] often leads to the generation of ethically harmful content [8].…”
Section: Introduction
confidence: 99%