2021
DOI: 10.1007/s13347-021-00450-x
Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them

Abstract: The notion of a “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at l…

Cited by 152 publications (81 citation statements)
References 58 publications (80 reference statements)
“…As incidents occur, and sometimes reoccur, the ability to effectively hold the responsible parties answerable for a system's behaviour is essential for maintaining the public's trust in the technology (Knowles and Richards, 2021). Yet, multiple scholars have raised concerns over an ongoing accountability gap, i.e., current moral and legal frameworks fail to explicitly answer who should be held responsible for the actions taken by an autonomous system, and how (Raji et al., 2020; Santoni de Sio and Mecacci, 2021). Although the systems themselves cannot be granted legal personhood and held accountable, the organisations and individuals who may benefit from their development, deployment, and use can (Bryson et al., 2017; Solaiman, 2017).…”
Section: Accountability and Responsibility (mentioning)
Confidence: 99%
“…There may be 'sedimentation' of the technology, in the sense that the use of the technology itself may recede into the background (Rosenberger and Verbeek, 2017; Lewis, 2021). This phenomenon has been well known since Heidegger and Merleau-Ponty (and later Dreyfus).…”
Section: AI-Time: How AI Shapes Our Time and Functions as a Time Machine (mentioning)
Confidence: 99%
“…2019; Zuboff, 2019; Danaher, 2019; Couldry & Mejias, 2019; Coeckelbergh, 2020; Véliz, 2020; Berberich et al., 2020; Mohamed et al., 2020; Bartoletti, 2020; Crawford, 2021; Santoni de Sio & Mecacci, 2021; Cowls, 2021). However, less attention has been paid to the philosophical nature of AI.…”
Section: Introduction (mentioning)
Confidence: 99%
“…A further argument in favor of a ban states that since IHL is intended to address human behavior in war, systems capable of selecting and engaging targets independently of a human operator make IHL difficult, if not impossible, to apply. Under the current legal framework, the challenge of holding a human liable for the actions of an AWS is said to create an accountability gap [16], which Santoni de Sio and Mecacci explain is just one of four responsibility gaps presented by intelligent systems [17]. Proponents of a ban on AWSs highlight that the concept of meaningful human control is not only central to the debate about AWSs, but also that the inability to ensure human control of previous weapons technologies has motivated past disarmament treaties [9].…”
Section: Law Applicable to Autonomous Security Systems (mentioning)
Confidence: 99%