2022
DOI: 10.1145/3495013
It’s Complicated: The Relationship between User Trust, Model Accuracy and Explanations in AI

Abstract: Automated decision-making systems become increasingly powerful due to higher model complexity. While powerful in prediction accuracy, Deep Learning models are black boxes by nature, preventing users from making informed judgments about the correctness and fairness of such an automated system. Explanations have been proposed as a general remedy to the black box problem. However, it remains unclear if effects of explanations on user trust generalise over varying accuracy levels. In an online user study with 959 …

Cited by 50 publications (34 citation statements)
References 44 publications
“…Yu et al. [73] manipulated system accuracy on four levels (70%, 80%, 90%, and 100%), giving false positive/negative advice accordingly: trust decreased only in the 70% condition. Papenmeier et al. [58] compared trust levels of participants who interacted with systems of high, medium, and low ("antagonistic") accuracy and found that participants indeed showed adequate levels of trust, in line with Yu and colleagues. To summarize, individuals can distinguish between appropriate and poor AI advice based on its accuracy, which is a crucial prerequisite for optimal trust calibration.…”
Section: Trust in AI: System Accuracy
confidence: 60%
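The accuracy manipulation quoted above can be sketched in a few lines of Python. The snippet below is a hypothetical illustration, not the cited authors' study code: the function name advice_schedule and the 40-trial block size are assumptions. It flips the system's advice on exactly enough trials so that the delivered advice accuracy in each block matches the target condition (70%, 80%, 90%, or 100%).

import random

def advice_schedule(true_labels, target_accuracy, rng):
    # Hypothetical helper: flip the advice on exactly
    # round(n * (1 - target_accuracy)) randomly chosen trials, so the
    # block-level advice accuracy hits the target condition.
    n = len(true_labels)
    error_idx = set(rng.sample(range(n), round(n * (1 - target_accuracy))))
    return [not label if i in error_idx else label
            for i, label in enumerate(true_labels)]

rng = random.Random(7)
truth = [rng.random() < 0.5 for _ in range(40)]  # assumed 40 binary trials per block
for target in (0.70, 0.80, 0.90, 1.00):
    advice = advice_schedule(truth, target, rng)
    delivered = sum(a == t for a, t in zip(advice, truth)) / len(truth)
    print(f"target {target:.0%} -> delivered advice accuracy {delivered:.0%}")

Fixing the exact number of wrong-advice trials per block, rather than flipping each trial independently with probability 1 - target_accuracy, keeps the experienced accuracy identical for every participant in a condition, which is what a between-subjects accuracy manipulation requires.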
“…Although the majority of studies found positive effects of AI explanations on trust, some studies observed feelings of manipulation by the AI [9] or subsequent overconfidence in the AI [69]. Papenmeier et al. [57,58] found evidence that not all explanations are helpful, and some might even be harmful: they discovered that adding nonsensical or random explanations hurt trust (as one would hope). Furthermore, their results show that explanations do not improve trust when individuals interact with a sufficiently accurate system.…”
Section: Trust in AI: Explaining AI Output
confidence: 99%
“…There have been many approaches towards the definition and measurement of trust [46,63,82,112]. Trust is seen as a construct that is relevant both for relationships among humans and between humans and machines.…”
Section: Trust and Reliance
confidence: 99%
“…First, there is no widely accepted definition of trust in intelligent systems, although many definitions have been proposed [72-74]. Second, measuring trust is very challenging because it evolves [75-77] and is affected by many factors [78], for example domain expertise [75,77], visualised information and uncertainty [48,79], model accuracy [80,81], and level of transparency [82]. In addition, there is growing consensus among XAI researchers that optimising trust is not always desirable; rather, the emphasis should lie on appropriate trust [58] and trust calibration [83,84].…”
Section: Trust in Intelligent Systems
confidence: 99%