2020
DOI: 10.1007/s13218-020-00637-y

One Explanation Does Not Fit All

Abstract: The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in the industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system's operators and the individuals whose case is being decided. While a variety of interpretability and explainability me…

Cited by 102 publications (74 citation statements)
References 29 publications
“…Following the call to extend XAI to social and interactive approaches, novel developments in XAI acknowledge the diversity in explainees in terms of their expectations, interests, and needs as a way to personalize explanations [14,15,11,16,7]. These approaches characterize the explainee in terms of a number of variables that specify the person's characteristics…”
Section: B. Social Interaction Is Not Just About Personalization (mentioning; confidence: 99%)
“…To sum up the above-mentioned limitations, current research in XAI lacks a conceptual framework that would account for both explaining as a bidirectional social process and the dynamics of understanding in everyday explanations. A conceptual basis that allows one to assess and describe the process of explaining could enhance the design of AI systems tremendously [7] by orienting them toward the production of socially relevant explanations that cover not only specific points of interest to the addressee, but also the general dynamics of the process taking place on different levels. A conceptual basis is necessary because, in a recent review, Anjomoshoae et al. [6, p. 1082] revealed that 39% of research concerned with explainable and intelligent agents "did not rely on any theoretical background related to generating explanations", thereby strongly suggesting that current theories might be lagging behind what designers of AI systems already recognize as being more appropriate.…”
Section: Scientific Explanations Are Not Everyday Explanations (mentioning; confidence: 99%)
“…Google's People + AI Guidebook has described best practices for designing human-centered AI products and acknowledged the importance of interaction and explainability [17]. There are also other AI systems that personalize explanations in interactive environments (Akula et al., 2019; Schneider and Handali, 2019; Sokol and Flach, 2020), but the necessity of tailoring explanations to the needs of different patients has not been discussed in the past literature. In my interview study, physicians often stated that they develop their explanations considering the patient's emotional, cultural, or socioeconomic status; they also have to keep in mind the intellectual level of the patients.…”
Section: Tailoring Explanation To Suit Different Patients (mentioning; confidence: 99%)