2019
DOI: 10.1007/s11023-019-09509-3

A Misdirected Principle with a Catch: Explicability for AI

Abstract: There is widespread agreement that there should be a principle requiring that artificial intelligence (AI) be 'explicable'. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of 'explicability'. Roughly, the principle states that "for AI to promote and not constrain human autonomy, our 'decision about who should decide' must be informed by knowledge of how AI would act instead of us" (Floridi et al…

Cited by 110 publications (103 citation statements)
References 21 publications
“…Controllability Retain (complete) human control concerning a system User [6,12,13,22,44,49,56,60,74] [39]…”
Section: Confidence (mentioning, confidence: 99%)
“…Privacy Assess and increase a system's privacy practices User [14,16,78,98] Responsibility Provide appropriate means to let humans remain responsible or to increase perceived responsibility Regulator [6,13,20,43,56,57,60,104] [68]…”
Section: Confidence (mentioning, confidence: 99%)
“…This emphasizes another aspect of digital microvirtue ethics, which is the ability to identify the relevant stakeholders and not just focusing on the immediate recipients of the machine learning system. This also implies that, sometimes, we identify relevant stakeholders but we think that there are no particular moral concerns attached to them (Robbins 2019). With this ability, we direct moral attention and concern to those we think deserve it.…”
Section: Foregrounding Virtuous Behavior (mentioning, confidence: 99%)
“…Superintelligence) and amounted to the ethics of fanciful scenarios of robot uprisings [6]. The second wave of AI ethics addressed the practical concerns of machine learning (ML) techniques: the black-box algorithm and the problem of explainability [9,16], the lack of equal representation in training data and the resulting biases in AI models [2,7], and the increase in facial and emotion recognition systems infringing on citizen's rights (e.g. privacy) [4].…”
Section: Introduction (mentioning, confidence: 99%)