2018
DOI: 10.48550/arxiv.1810.00184
Preprint

Stakeholders in Explainable AI

Abstract: There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable. However, there is no general consensus over what is meant by 'explainable' and 'interpretable'. In this paper, we argue that this lack of consensus is due to there being several distinct stakeholder communities. We note that, while the concerns of the individual communities are broadly compatible, they are not identical, which gives rise to different intents and …

Cited by 42 publications (59 citation statements) | References 7 publications (19 reference statements)
“…A starting point to address these challenges is to map out the design space of XAI and develop frameworks that account for people's diverse explainability needs. Many have summarized common user groups that demand explainability and what they would use AI explanations for [11,49,83]:…”
Section: Diverse Explainability Needs of AI Stakeholders
mentioning (confidence: 99%)
“…Accordingly, one of the reflection points is the audience and stakeholders involved in XAI. Preece et al. (2018) attribute the lack of consensus on explainability and interpretability to the fact that different stakeholder communities have to deal with them, and further depict where these communities' perspectives overlap and where they do not. Moreover, Samek, Wiegand & Müller (2017) identify the following reasons for needing XAI: system verification, learning from the system, and compliance with legislation.…”
Section: Related Research
mentioning (confidence: 99%)
“…Typical categorizations of stakeholders are based on their role in an organization [3,9,12,16], their machine learning experience [18], or a combination of the two [15]. Different propositions have likewise been made in the literature for categorizing stakeholder needs regarding explainability.…”
Section: Understanding Stakeholder Needs
mentioning (confidence: 99%)
“…Some authors mention possible high-level goals of explainability, such as model debugging, monitoring, etc. [3], or revealing (un)known (un)knowns [12]. Langer et al. [9] provide a list of more detailed needs, such as privacy, fairness, legal compliance, etc.…”
Section: Understanding Stakeholder Needs
mentioning (confidence: 99%)