Proceedings of the 25th International Conference on Intelligent User Interfaces 2020
DOI: 10.1145/3377325.3377480
How do visual explanations foster end users' appropriate trust in machine learning?

Abstract: Figure 1: Examples of the visual explanations in our experiment. We tested two ways to represent an example instance: (a) an image or (b) a rose chart of features, and three spatial layouts to arrange the instances: (c) grid, (d) tree, and (e) graph. Panels (c-e) show explanations of the same instances, classifier, and classification recommendation.
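The rose chart in (b) is, in effect, a polar bar chart with one wedge per feature. Below is a minimal sketch of how such a chart could be rendered with matplotlib; the feature names and values are hypothetical placeholders, not data from the paper:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-feature values for one instance, scaled to [0, 1].
features = {"lobedness": 0.8, "elongation": 0.3, "solidity": 0.9,
            "eccentricity": 0.5, "compactness": 0.6, "roundness": 0.4}

names = list(features)
values = np.array([features[n] for n in names])
angles = np.linspace(0.0, 2 * np.pi, len(names), endpoint=False)

# A rose chart is a polar bar chart: one wedge per feature,
# with wedge length encoding the feature's value.
ax = plt.subplot(projection="polar")
ax.bar(angles, values, width=2 * np.pi / len(names), alpha=0.6)
ax.set_xticks(angles)
ax.set_xticklabels(names)
ax.set_yticklabels([])
plt.show()
```

Keeping a fixed angular position per feature makes instances directly comparable when many such charts are arranged side by side in the grid, tree, or graph layouts.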

Cited by 120 publications (69 citation statements) · References 101 publications

Citation statements (ordered by relevance):
“…An essential step towards this goal is to use explanations to guide people to trust an AI model when it is right and not to trust it when it is wrong. In other words, with the assistance of model explanations, people should have better capability of calibrating their trust in the model [69,71]. Note that when an explanation simply improves the human-AI joint decision making accuracy, it does not necessarily mean this desideratum is satisfied.…”
Section: Literature Review
Mentioning confidence: 99%
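The distinction drawn in the statement above is easy to make concrete: joint decision accuracy counts only correct final decisions, while calibrated trust asks whether people follow the model more when it is right than when it is wrong. A minimal sketch of both measures (the function names and toy data are illustrative, not taken from the cited studies):

```python
import numpy as np

def joint_accuracy(human_final, truth):
    """Fraction of the human's final decisions that are correct."""
    return np.mean(human_final == truth)

def trust_calibration(human_final, model_pred, truth):
    """Agreement with the model, split by whether the model is right.

    Calibrated trust = high agreement when the model is correct,
    low agreement when it is wrong.
    """
    agree = human_final == model_pred
    correct = model_pred == truth
    return agree[correct].mean(), agree[~correct].mean()

# Toy data: a user who follows the model unconditionally.
truth = np.array([1, 0, 1, 1, 0, 1])
model = np.array([1, 0, 0, 1, 0, 1])   # 5 of 6 predictions correct
human = model.copy()

print(joint_accuracy(human, truth))            # ~0.83
print(trust_calibration(human, model, truth))  # (1.0, 1.0): blind trust
```

In the toy run, joint accuracy looks respectable even though the simulated user agrees with the model 100% of the time whether it is right or wrong, which is exactly the uncalibrated-trust case the statement warns about.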
“…A comparison of the three studies and the desiderata they examine:

Study            Task                  Explanation type       Desiderata examined
Lai et al [41]   deception detection   feature contribution   N/A · N/A · ✓?
Cai et al [13]   drawing recognition   example-based          mixed results · N/A · N/A
Yang et al [69]  leaf classification   example-based          N/A · N/A · ✓

Note: "N/A" means the study does not examine the desideratum. ✓ (or ✗) means the study finds (or does not find) evidence suggesting the explanation method it examines satisfies a desideratum.…”
Section: Literature Review
Mentioning confidence: 99%
“…Many applications involving machine and deep learning algorithms provide post-hoc explanations of why a decision, such as a refused mortgage or parole request, was made. However, exploratory user interfaces using interactive visual designs offer a more likely path to successful customer adoption and acceptance (Chatzimparmpas et al., 2020; Hohman et al., 2018; Nourashrafeddin et al., 2018; Yang et al., 2020). Well-designed interactive visual interfaces will improve the work of machine learning algorithm developers and facilitate comprehension by various stakeholders.…”
Section: Figure 7 Cliché-ridden Images of Humanoid Robot Hands and S…
Mentioning confidence: 99%
“…The generated natural language rationales outperformed a baseline on ratings of Confidence, Human-Like, Adequately Justified, and Understandable with human judges when observing AI play traces of the arcade game alongside different text rationales. In general, many XAI studies test for user trust in the model to make correct decisions [18,40] and user confidence in the decision-making process [1,17]. Conversely, our study measures intelligibility to an outside observer of a complex, multi-agent system where players act and react to each other while pursuing adversarial goals in real time.…”
Section: Background and Related Work
Mentioning confidence: 99%