2016
DOI: 10.1007/s10677-016-9745-2

The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?

Abstract: Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100 % safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three…

Cited by 227 publications (153 citation statements)
References 20 publications
“…46,47 Although there are some points of disanalogy that I will not discuss, in its simplest form the trolley problem supposes that there is a runaway trolley on train tracks heading directly for 5 people who are, inexplicably, tied to the tracks. 48 You, the reader, are standing beside a lever that, if pulled, will switch the trolley to a different track that has only 1 person tied to it. You can either do nothing, allowing the speeding trolley to kill the 5 people on the main track, or divert the trolley by pulling the lever, resulting in the death of just 1 person.…”
Section: Forced-choice Algorithms (mentioning)
confidence: 99%
“…The trolley problem is a classic philosophical expression of this scenario, and its direct relevance and applicability to the challenge of programming driverless cars has been noted on more than one occasion recently (Markoff 2015, p. 61; Nyholm and Smids 2016; Fleetwood 2017; Holstein 2017; Holstein and Dodig-Crnkovic 2018; Wiseman and Grinberg 2018; Himmelreich 2018; Renda 2018; Liu 2018). In a real or imagined future, a self-driving trolley may, in some situations, avoid the trolley problem altogether through a series of prior decisions, such as a judicious application of brakes without need for human intervention to bring this about.…”
Section: Can Robots Teach Us To Be Ethical? (mentioning)
confidence: 99%
“…This paper is a contribution to the new field of the ethics of automated driving (e.g. Goodall 2014a, b; Lin 2015; Hevelke and Nida-Rümelin 2014; Gurney 2016; Gogoll and Müller 2016; Nyholm and Smids 2016; Nyholm forthcoming). Its aim is to argue that this field should take mixed traffic very seriously.…”
Section: Introduction (mentioning)
confidence: 99%