2021
DOI: 10.1016/j.artint.2021.103473

What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research

Abstract: Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these 'stakeholders' desiderata') in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata…

Cited by 295 publications (240 citation statements)
References 166 publications
“…However, the traits that render an explanation satisfying are not independent of the audience’s characteristics and expectations. To this end, a series of recent papers address this exact question (van den Berg and Kuiper, 2020; Langer et al., 2021), highlighting the need to consider the point of view of the various stakeholders. As a consequence, explanations should be tailored to the specific audience they are intended for, aiming at conveying the necessary information in a clear way.…”
Section: Future Directions (mentioning)
confidence: 99%
“…Investigating these trade-offs of stakeholders' interests provides a fruitful direction for future research (Langer et al., 2021).…”
Section: Limitations and Main Implications (mentioning)
confidence: 99%
“…Where the ML-based systems are used by human experts to inform decision-making, a local and contemporaneous explanation will be required in order to decide whether to act on a specific prediction in real time. Consent and Control. XAI methods may also play a role in enabling stakeholders to better exercise their own human autonomy in relation to an ML-model [24]. Appropriate explanations could enable users to give their informed consent to recommendations by ML-based personal assistants, for example, or, in the case of an AV, to understand a transition demand sufficiently to resume effective hands-on control of the system.…”
Section: Stakeholders and Explanations (mentioning)
confidence: 99%