2020
DOI: 10.1007/978-3-030-49760-6_4

What Are People Doing About XAI User Experience? A Survey on AI Explainability Research and Practice

Cited by 44 publications (34 citation statements)
References 69 publications
“…More generally, all the metrics discussed in this paper should be tested against a user experiment. As shown by [6,8], there is a great variety of possible experimental setups depending on what should be explained, in which context, and for whom the explanation is targeted.…”
Section: Discussion
confidence: 99%
“…While several prior literature surveys have sought to collate and organize a list of interpretability needs, our framework makes some key advances to provide a more nuanced understanding of these needs. First, where prior surveys focus primarily on computer science subdisciplines [44,57,87], our framework incorporates these insights and extends them by looking to the legal literature [31,55,118,132] and research on participatory action and design [50,108]. As a result, our framework is able to surface objectives such as "contesting a decision" (O7) or "understanding how one's data is being used" (O5) that prior surveys did not identify.…”
Section: Distilling Stakeholder Needs Into Goals, Objectives, and Tasks
confidence: 99%
“…Part of this disconnect stems from the difficulty in identifying and characterizing different stakeholders and their interpretability needs. A growing body of work has engaged with this problem, proposing an ecosystem of stakeholders [96,113], and conducting literature surveys [44,57,87,131] and interview studies [27,58,114] to understand their goals. Resultant frameworks typically adopt one of two approaches: they either categorize stakeholders by their expertise (using labels such as "experts", "novices", or "nonexperts" [57,87,131]) or by their functional role in the ecosystem (e.g., "executives" and "engineers" [17], model "breakers" and "consumers" [58], or model "operators" and "executors" [113]).…”
Section: Introduction
confidence: 99%
“…For instance, there are works on developing algorithms and novel DL architectures in XAI to add explainability to the models [42,43,44,45,46]. In comparison, there is also work that considers user experience and user requirements for XAI [7,8,9,10,47], and evaluates algorithms and models with user studies [48]. However, analyzing and categorizing XAI algorithms is not the focus of this paper.…”
Section: Classifying HCML Research
confidence: 99%
“…Some of the most common non-expert-user issues are explainability [6,7,8,9,10], interpretability [11,12,13,14], privacy and security [15,16,17,18,19,20], reliability [21,22,23,24,25], and fairness [26,27,28,29,30,31]. These categories emerged as contributing research areas of HCML, and each plays a role in the broader goal of improving the usability and adoptability of AI systems.…”
Section: Introduction
confidence: 99%