2022
DOI: 10.48550/arxiv.2202.08979
Preprint

The Response Shift Paradigm to Quantify Human Trust in AI Recommendations

Abstract: Explainability, interpretability, and how much they affect human trust in AI systems are ultimately problems of human cognition as much as of machine learning, yet the effectiveness of AI recommendations and the trust afforded by end-users are typically not evaluated quantitatively. We developed and validated a general-purpose human-AI interaction paradigm which quantifies the impact of AI recommendations on human decisions. In our paradigm, we confronted human users with quantitative prediction tasks, asking them …
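
The abstract describes quantifying how much an AI recommendation shifts a user's answer between an initial and a revised prediction. As a rough illustration only, the Python sketch below computes a weight-of-advice-style shift score; the function name, the masking rule, and the example numbers are assumptions for illustration, not the paper's actual measure.

```python
import numpy as np

def response_shift(initial, final, ai_recommendation):
    """Illustrative response-shift measure (akin to 'weight of advice'):
    0 means the user ignored the AI, 1 means the user fully adopted the
    AI's recommendation. Names here are hypothetical, not from the paper."""
    initial = np.asarray(initial, dtype=float)
    final = np.asarray(final, dtype=float)
    ai = np.asarray(ai_recommendation, dtype=float)
    gap = ai - initial
    # Trials where the AI agreed with the user's first answer carry no
    # information about shift; mask them out before averaging.
    valid = gap != 0
    return np.mean((final[valid] - initial[valid]) / gap[valid])

# Example: three prediction trials (e.g., predicted student grades)
initial = [10, 14, 8]   # user's answer before seeing the AI
ai      = [12, 12, 8]   # AI recommendation shown to the user
final   = [11, 13, 8]   # user's revised answer
print(response_shift(initial, final, ai))  # 0.5 -> halfway toward the AI
```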

Cited by 4 publications (5 citation statements) | References 32 publications
“…Based on the fact that the literature to date has produced mixed results, we concluded that context-specific (confounding) factors have been overlooked so far that may explain why some studies find significant results and others report non-significant relationships. One reason could be the actual or the perceived performance of the AI (Shafti et al, 2022). Another factor could be the type of explanation (e.g., David et al, 2021;Lai et al, 2020;van der Waa et al, 2021) that leads participants to follow the system's advice more often.…”
Section: Discussion of the Results
confidence: 99%
“…The participants' task in the study was to identify hateful content via the user interface to detect hate speech. Shafti et al (2022) observed that good explanations of XAI can lead to a significantly lower error rate, a higher human performance and higher user confidence in AI. In their experimental study, a grade prediction task of students was used based on tabular data about the student's background (e.g., parents' jobs or weekly study time).…”
Section: Related Work
confidence: 99%
“…Particularly, end users’ decision to act on or dismiss AI recommendations may be attached to some human-centred AI design characteristics and the degree of AI explainability. 21 22 Human factor aspects are central in AI-based decision support systems in safety critical applications, 23 prompting us to keep actively engineering safety into AI systems.…”
Section: Discussion
confidence: 99%
“…There are a number of studies that report positive effects of explanations on human perception [23, 41–47]. These experiments investigated tasks in various modalities and ranged from asking users to detect hate speech, predict students' grades, predict an agent's performance in frozen frames of video games, or decide on (virtual) patients' insulin intake.…”
Section: Human Perception and XAI
confidence: 99%