2019
DOI: 10.1007/s11023-019-09502-w
The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

Abstract: In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will lack a well-defined goal. Aside …

Cited by 166 publications (113 citation statements)
References 48 publications
“…Since we argue in Section 3 that ANNs are explainable using the traditional models described above, it may be argued that our account of interpretability could inherit some of these issues. We agree that the causal-interventionist account may be useful for solving some questions about explainability, understandability, and interpretability in AI (Páez 2019). Indeed, we describe interpretability processes in Section 4.3, some of which result in understandable explanations which include counterfactual information; however, the causal-interventionist account includes pragmatic elements that we maintain should not count against a phenomenon's explainability, namely that an explanation should be "deeper or more satisfying" (Woodward 2003: 190) than those provided by DN, IS, or CM explanations.…”
Section: Four Kinds of Explanation
confidence: 66%
“…(Lipton 2018) We contend that much confusion in the debate about, and push for, XAI can be attributed to a conflation of explainability and interpretability. Interpretation has been devised and variously defined within the sciences themselves (Ribeiro et al. 2016; Mittelstadt et al. 2019; Lipton 2018; Krishnan 2019; Páez 2019). We cannot fault philosophers or scientists for misunderstanding the scientific notion of interpretation since there is no single such notion to rely on.…”
Section: Interpretability
confidence: 99%
“…Pragmatic goals require pragmatic strategies. Because iML is fundamentally about getting humans to understand the behaviour of machines, there is a growing call for personalised solutions (Páez 2019). We take this pragmatic turn seriously and propose formal methods to implement it.…”
Section: Pragmatism + Pluralism Relativist Anarchy?
confidence: 99%
“…The notion of self-x-capacity is related to the program of organic computing, where the notion of self-x-property is standardly used (self-repairing …). The recent developments in AI only start to get the full attention from philosophy of science and philosophy of mind and cognition that they deserve: cf. Buckner (2018, 2019), López-Rubio (2018), Páez (2019), Schubbach (2019), and Zednik (2019). I chose the term "AI state space", or sometimes just "AI space" for short.…”
Section: Introduction
confidence: 99%