2021
DOI: 10.48550/arxiv.2112.01016
Preprint

On Two XAI Cultures: A Case Study of Non-technical Explanations in Deployed AI System

Abstract: Explainable AI (XAI) research has been booming, but the question "To whom are we making AI explainable?" has yet to gain sufficient attention. Little of XAI is comprehensible to non-AI experts, who are nonetheless the primary audience and major stakeholders of deployed AI systems in practice. The gap is glaring: what counts as "explained" for AI experts versus non-experts differs greatly in practical scenarios. Hence, this gap has produced two distinct cultures of expectations, goals, and forms of XAI in r…

Cited by 1 publication (1 citation statement)
References 35 publications
“…Local explanation approaches can be model-agnostic and used to explain tree models [65]. However, these approaches may be slow or experience sampling variability when used with models that have many input features.…”
Section: XAI Methods (mentioning)
confidence: 99%
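The citing statement notes that model-agnostic local explanation approaches can be slow and can exhibit sampling variability when applied to models with many input features. A minimal sketch of why: perturbation-based attribution (in the spirit of LIME-style methods) estimates feature influence from random samples, so two runs with different seeds yield different estimates. All names below, including the stand-in `tree_model`, are hypothetical and not taken from the cited works.

```python
import random

# Hypothetical stand-in for a fitted tree model: two hand-coded decision
# stumps (illustrative splits only, not a real trained model).
def tree_model(x):
    score = 0.0
    if x[0] > 0.5:
        score += 1.0
    if x[1] > 0.2:
        score += 0.5
    return score

def local_attribution(model, x, feature, n_samples=200, sigma=0.3, seed=None):
    """Model-agnostic, perturbation-based local attribution (a sketch):
    average change in model output when one feature of x is jittered
    with Gaussian noise. The estimate is stochastic by construction."""
    rng = random.Random(seed)
    base = model(x)
    total = 0.0
    for _ in range(n_samples):
        xp = list(x)
        xp[feature] += rng.gauss(0.0, sigma)
        total += model(xp) - base
    return total / n_samples

x0 = [0.6, 0.1]
# Two runs with different seeds: the estimates differ, illustrating the
# sampling variability the citing statement warns about.
a1 = local_attribution(tree_model, x0, feature=0, seed=0)
a2 = local_attribution(tree_model, x0, feature=0, seed=1)
print(a1, a2)
```

Both estimates are negative (perturbing feature 0 downward across the 0.5 split lowers the score), but they disagree in magnitude; with many input features, each of which needs its own sampling loop, both the cost and the run-to-run variance grow.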