2021
DOI: 10.1007/s11280-021-00916-0

Explainable recommendation: when design meets trust calibration

Abstract: Human-AI collaborative decision-making tools are being increasingly applied in critical domains such as healthcare. However, these tools are often seen as closed and opaque to human decision-makers. An essential requirement for their success is the ability to provide explanations about themselves that are understandable and meaningful to the users. While explanations generally have positive connotations, studies showed that the assumption behind users interacting and engaging with these explanations co…

Cited by 16 publications (11 citation statements) | References 88 publications
“…However, also interdisciplinary approaches have been presented that incorporate cognitive psychology and philosophy in order to reframe and reflect the motivations and reasons why explanations are to be used in intelligent systems [54]. Recently, there have been approaches to design for explainability of advanced machine learning to non-expert users [15,27,75]. So far, the learnings have not been transferred to specific application areas, such as construction.…”
Section: Strategies For Trust Calibration (mentioning)
confidence: 99%
“…Interestingly, the analysis by Zhang et al [114] represents only one of few studies that consider the impact on domain experts rather than end-users. For clinical decision support systems, [75] found that explanations did not always effectively support users in calibrating their trust, due to conflicts with usability and required efforts.…”
Section: Effectiveness Of Trust Calibration Communication (mentioning)
confidence: 99%
“…We also aim at broadening discussions on explainable AI for collaborative decision-making and paving the way for more research on how to customise and contextualise explanations so that they fit users' needs and expectations of each of their different classes. Compared to our previous work (Naiseh et al., 2021a; Naiseh et al., 2021b), this study adds a quantitative evaluation of different XAI classes and their effect on users' trust calibration. It also elicits XAI interface requirements correlated with particular XAI classes.…”
Section: Introduction (mentioning)
confidence: 99%
“…We first explored what errors users make while interacting with XAI interfaces (Naiseh et al., 2021b). We then conducted an explanatory study to discover potential design solutions to mitigate users' errors (Naiseh et al., 2021a).…”
Section: Introduction (mentioning)
confidence: 99%
“…Developed by AI and Human Computer Interaction (HCI) researchers, XAI is a user-centric field of study aimed at developing techniques to make the functioning of these systems and models more transparent and consequently more reliable [2]. Recent research shows that the trust calibration on the models' decision is very important, since exaggerated or measured confidence can lead to critical problems depending on the context [19].…”
Section: Introduction (mentioning)
confidence: 99%