Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations
Preprint, 2021
DOI: 10.48550/arxiv.2101.09498

Abstract: Feature attribution is widely used in interpretable machine learning to explain how influential each measured input feature value is for an output inference. However, measurements can be uncertain, and it is unclear how the awareness of input uncertainty can affect the trust in explanations. We propose and study two approaches to help users manage their perception of uncertainty in a model explanation: 1) transparently show uncertainty in feature attributions to allow users to reflect on, and 2) suppress at…
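The abstract names the technique (feature attribution) but the excerpt above is truncated, so the sketch below is only an illustration of one common way to surface input uncertainty in an attribution: draw samples of the measured inputs from an assumed measurement-noise model, compute an occlusion-style attribution for each sample, and report the mean and spread per feature. The function names, the toy linear model, and the Gaussian noise model are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch (assumption, not the paper's method): propagate input
# measurement uncertainty through an occlusion-style feature attribution by
# Monte Carlo sampling, then report each attribution's mean and spread.

def attributions_with_uncertainty(model, x, x_std, baseline, n_samples=200, seed=0):
    """model: callable mapping a 1-D feature vector to a scalar score.
    x: measured feature values; x_std: per-feature measurement std. dev.
    baseline: reference values used for occlusion (e.g. feature means).
    Returns (mean_attr, std_attr), one entry per feature."""
    rng = np.random.default_rng(seed)
    attr_samples = np.empty((n_samples, len(x)))
    for s in range(n_samples):
        x_s = rng.normal(x, x_std)            # one plausible noisy measurement
        full_score = model(x_s)
        for j in range(len(x)):
            occluded = x_s.copy()
            occluded[j] = baseline[j]          # replace feature j with its baseline
            attr_samples[s, j] = full_score - model(occluded)
    return attr_samples.mean(axis=0), attr_samples.std(axis=0)

if __name__ == "__main__":
    weights = np.array([2.0, -1.0, 0.5])       # toy linear model, self-contained
    model = lambda v: float(v @ weights)
    x = np.array([1.0, 3.0, -2.0])             # measured inputs
    x_std = np.array([0.0, 0.5, 0.1])          # feature 1 is the most uncertain
    mean_attr, std_attr = attributions_with_uncertainty(model, x, x_std, np.zeros(3))
    for j, (m, s) in enumerate(zip(mean_attr, std_attr)):
        print(f"feature {j}: attribution {m:+.2f} +/- {s:.2f}")
```

On this toy linear model the spread of attribution j is simply |w_j| * x_std[j], which matches the abstract's premise that uncertain measurements make the corresponding attributions less trustworthy; whether to show that spread or to suppress it is exactly the design question the paper raises.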

Cited by 2 publications (2 citation statements) · References 70 publications
“…Finally, while explanations were proven to be effective in leading the user in achieving a task and improving their trust and understanding of the model, it has also been demonstrated that sometimes they are either not able to improve [97,98] or, worse, they reduce human accuracy and trust [99]. A similar result in a different context was found by Dinu et al [100].…”
Section: Understanding the Human's Perspective in Explainable AI (citation type: mentioning)
confidence: 63%
“…They find that users resist advice from an algorithm more when the environment is more uncertain. D. Wang et al (2021) find that communicating the level of uncertainty to users does not affect trust, confidence, or decision-making quality, but users spend more time making decisions when they receive information about uncertainty. In their work, they use uncertainty as the level of input certainty captured by the model rather than irreducible environmental uncertainty.…”
Section: Related Studies (citation type: mentioning)
confidence: 99%