2021 8th International Conference on Behavioral and Social Computing (BESC)
DOI: 10.1109/besc53957.2021.9635271

Nudging through Friction: An Approach for Calibrating Trust in Explainable AI

Abstract: Explainability has become an essential requirement for safe and effective collaborative Human-AI environments, especially when generating recommendations through black-box modality. One goal of eXplainable AI (XAI) is to help humans calibrate their trust while working with intelligent systems, i.e., avoid situations where human decision-makers over-trust the AI when it is incorrect, or under-trust the AI when it is correct. XAI, in this context, aims to help humans understand AI reasoning and decide whether to…

Cited by 12 publications (20 citation statements: 2 supporting, 18 mentioning, 0 contrasting) | References 19 publications

“…For instance, P9 asked to customise the number of data features in Local explanations, "The average pharmacist does not need to see all these factors that the AI is considering, some of them are just simple rules". Our observations are aligned with recent studies that showed that long and redundant explanations made participants skip them (Naiseh et al, 2021c) and decreased participants' satisfaction with the explanation (Narayanan et al, 2018). In summary, explanations' ease of use and their modalities could be critical to engaging users with them when participants' time constraints and the difficulty of the task are primary issues.…”
Section: Usability (supporting, confidence: 92%)
“…One approach to successful trust calibration is eXplainable AI (XAI) which refers to an AI component that explains AI recommendations to humans receiving them (Naiseh et al, 2021c). Explainability has been identified as a requirement to promote reliability and trust in the AI output and also to ensure humans remain in control (Holzinger, 2021).…”
Section: Introduction (mentioning, confidence: 99%)