Designing Interactive Systems Conference 2021
DOI: 10.1145/3461778.3462131
Who needs to know what, when?: Broadening the Explainable AI (XAI) Design Space by Looking at Explanations Across the AI Lifecycle

Cited by 49 publications (28 citation statements)
References 53 publications
“…This contradiction implies that being able to successfully apply an explanation does not necessarily enhance a user's assessment of the XAI system. These findings support discourse from prior work on human-centered or user-centered perspectives to explainability [20,24,67]. Participants' preference towards using modes of explanation which objectively perform poorer on task performance metrics is a clear indicator that explanations need to consider the individual dispositions of the potential end-user to engender adoption.…”
Section: Qualitative Insights and Discussion (supporting)
confidence: 78%
“…Participants' questions reflected their desire for an actionable or utility-oriented understanding to support their end goal of optimizing code generation and programming productivity [18,50,52], such as asking the Input, Output, How and How-to XAI questions (described in Section 4) to facilitate strategies to get better outputs from the AI. This actionable understanding can also be supported by enabling follow-up actions towards their goals after seeing transparent information.…”
Section: Discussion 6.1 Informing XAI Approaches for GenAI for Code (mentioning)
confidence: 99%
“…There is no one-size-fits-all explanation, however. Human-centred XAI researchers therefore study how explanations can be effectively designed, considering factors such as the application context [59][60][61], human reasoning processes [62], and end users' goals [63] or personal characteristics [61,64].…”
Section: Visualisation for Explainable Artificial Intelligence (mentioning)
confidence: 99%