2022
DOI: 10.31219/osf.io/68emr
Preprint

Explainable Artificial Intelligence improves human decision-making: Results from a mushroom picking experiment at a public art festival

Abstract: Explainable Artificial Intelligence (XAI) enables an Artificial Intelligence (AI) to explain its decisions. This holds the promise of making AI more understandable to users, improving interaction, and establishing an adequate level of trust. We tested this assertion in the high-risk task of mushroom hunting, where users have to decide whether a mushroom is edible or poisonous with the aid of an AI-based app that suggests classifications based on mushroom images. In a between-subjects experiment N = 328 visitor…

Cited by 2 publications (2 citation statements)
References 44 publications
“…2,3 However, most studies that practically evaluate whether and how explanations affect expert decision-making focus on general problems with lessons that do not necessarily translate to high complexity tasks in the clinical sphere. 4,5 In the few cases where medical XAI has been investigated with clinical experts, these have tended to focus on diagnostic scenarios for which a pre-existing gold standard exists with which to calculate accuracy. [6][7][8] This is not the case for many non-diagnostic medical problems such as the haemodynamic management of sepsis that affects millions of patients worldwide.…”
Section: Introduction
Confidence: 99%
“…XAI, standing for eXplainable (X) Artificial (A) Intelligence (I), is “one that produces details or reasons to make its functioning clear or easy to understand” [4]. Leichtmann showed that explanations of the AI's predictions led to statistically significantly better decision-making performance in a mushroom-picking task [5]. Hudon also found that transparent and explainable AI systems could improve confidence between human and AI, which in turn has a positive impact on decision-making [6].…”
Confidence: 99%