26th International Conference on Intelligent User Interfaces 2021
DOI: 10.1145/3397481.3450650

Are Explanations Helpful? A Comparative Study of the Effects of Explanations in AI-Assisted Decision-Making

Abstract: This paper contributes to the growing literature in empirical evaluation of explainable AI (XAI) methods by presenting a comparison of the effects of a set of established XAI methods in AI-assisted decision making. Specifically, based on our review of previous literature, we highlight three desirable properties that ideal AI explanations should satisfy: improve people's understanding of the AI model, help people recognize the model uncertainty, and support people's calibrated trust in the model. Through randomi…

Cited by 121 publications (53 citation statements)
References 79 publications
“…Humans are typically asked to simulate model predictions given an input and some explanations [2, 9, 14, 15, 18, 20, 25, 35, 37-39, 45, 46, 50, 52, 58]. For example, given profiles of criminal defendants and machine explanations, participants are asked to guess what the AI model would predict [58].…”
Section: Three Core Concepts For Measuring Human Understanding (citation type: mentioning)
confidence: 99%
“…Measuring human understanding of model decision boundary via feature importance. Additionally, Wang and Yin [58] also tested human understanding of the model decision boundary via feature importance, specifically by (1) asking the participants to select, from a list of features, which one was most/least influential on the model's predictions and (2) asking them to specify a feature's marginal effect on predictions. Ribeiro et al [51] asked participants to perform feature engineering by identifying features to remove, given the LIME explanations.…”
Section: Three Core Concepts For Measuring Human Understanding (citation type: mentioning)
confidence: 99%
“…[37, p. 20]. Wang and Yin [95] examined the role of explanations in AI-supported decision making and found that various XAI methods were ineffective in supporting human decision makers on tasks for which they had limited domain expertise. Therefore, it is not always the case that the support provided by an AI model improves human decision-making quality.…”
Section: Improved Outcomes Decision Making (citation type: mentioning)
confidence: 99%