2018
DOI: 10.1111/phc3.12506
The ethics of crashes with self‐driving cars: A roadmap, II

Abstract: Self-driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, we need to think about who should be held responsible when self-driving cars crash and people are injured or killed.

Introduction (excerpt): Some major car manufacturers have recently promised that when the fully self-driving cars that they are developing are ready to go on the market, they will take responsibility in case any crashes occur. Volvo and Audi are among the …

Cited by 47 publications (22 citation statements); references 31 publications.
“…The obscuration of accountability just described has its counterparts in many other domains where artificial intelligence is being deployed—most notably self-driving vehicles. By looking at the ethical literature in relevant debates, one take-home message might be that we might be required to implement less individualistic notions of responsibility, such as distributed or collective responsibility, to close potential ‘responsibility gaps’ 28–30. Having said this, it remains unclear how these less individualistic notions of responsibility might translate into the legal system.…”
Section: Pitfalls Of Algorithmic Decision-making At the Structural Lementioning
confidence: 99%
“…Thus, if we have good enough reasons to believe that involving algorithms in medicine promotes more reliable decision-making, does that justify their deployment on consequentialist grounds? In this vein, how should we balance the values of transparency and evidence on the one hand, and reliability and efficacy on the other 30 (for a more sceptical view, cf. Stegenga 35)?…”
Section: Pitfalls Of Algorithmic Decision-making At the Structural Lementioning
confidence: 99%
“…The actions taken in such situations have potentially harmful consequences for car occupants, other traffic participants, and pedestrians. Therefore, it is important to carefully consider the ethics of how self-driving cars will be designed to make decisions, an issue that is the topic of current debate (Nyholm, 2018a,b; Dietrich and Weisswange, 2019; Keeling et al, 2019).…”
Section: Introductionmentioning
confidence: 99%
“…Most SDVs are supervised autonomous systems, that is, systems that require human intervention to operate properly. Even with human supervision, this particular application of AI is controversial due to safety concerns, since human lives are at stake [17][18]. However, despite the safety concerns, the steady development of and research into safety measures for SDVs are slowly gaining the trust of drivers and passengers.…”
Section: Self-driving Vehiclesmentioning
confidence: 99%