2023
DOI: 10.1038/s41598-023-28633-w
Non-task expert physicians benefit from correct explainable AI advice when reviewing X-rays

Abstract: Artificial intelligence (AI)-generated clinical advice is becoming more prevalent in healthcare. However, the impact of AI-generated advice on physicians’ decision-making is underexplored. In this study, physicians received X-rays with correct diagnostic advice and were asked to make a diagnosis, rate the advice’s quality, and judge their own confidence. We manipulated whether the advice came with or without a visual annotation on the X-rays, and whether it was labeled as coming from an AI or a human radiologi…


Cited by 32 publications (23 citation statements)
References 26 publications
“…This is especially relevant for translating learning-based computer-aided diagnostic/screening systems to routine clinical care.[15] Particularly, multi-label classification tasks, where the co-occurrence of disease makes deep learning models more vulnerable to reliance on confounding factors. Typical approaches to multi-label image classification explainability, including GradCAM,[16] have been criticised for their inability to highlight smaller pathologies or structures with complex shapes, for example, mechanical wiring.…”
Section: Discussion
Mentioning confidence: 99%
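The excerpt above references GradCAM as a common explainability approach for multi-label image classification. As a rough illustration of what GradCAM computes, here is a minimal NumPy sketch of its core step, assuming the feature-map activations and class-score gradients have already been extracted from a network (the shapes and the `grad_cam` helper are hypothetical, not from the cited work):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Core Grad-CAM computation on pre-extracted tensors.

    activations: (C, H, W) feature maps from the last conv layer.
    gradients:   (C, H, W) gradients of the target class score
                 with respect to those feature maps.
    Returns an (H, W) localization map scaled to [0, 1].
    """
    # Global-average-pool the gradients: one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))                       # shape (C,)
    # Weighted sum of feature maps, then ReLU to keep only features
    # with a positive influence on the target class.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize for overlay on the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 3 channels over a 4x4 spatial grid.
rng = np.random.default_rng(0)
acts = rng.random((3, 4, 4))
grads = rng.standard_normal((3, 4, 4))
heatmap = grad_cam(acts, grads)
```

Because the channel weights are spatially pooled, small or complexly shaped structures can be washed out in the resulting map, which is the limitation the excerpt criticises.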
See 1 more Smart Citation
“…This is especially relevant for translating learning-based computer-aided diagnostic/screening systems to routine clinical care. 15 Particularly, multi-label classification tasks, where the co-occurrence of disease makes deep learning models more vulnerable to reliance on confounding factors. Typical approaches to multi-label image classification explainability, including GradCAM, 16 have been criticised for their inability to highlight smaller pathologies or structures with complex shapes, for example, mechanical wiring.…”
Section: Discussionmentioning
confidence: 99%
“…Figure 2: 3-dimensional probability density functions from the Dirichlet distribution, each with different α values. The values of α are set to (0.5, 0.5, 0.5), (0.9, 0.9, 0.9) and (2, 5, 15).…”
Mentioning confidence: 99%
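The caption above describes Dirichlet densities for three parameter settings. For reference, the Dirichlet density at a point x on the simplex is pdf(x; α) = (1/B(α)) ∏ᵢ xᵢ^(αᵢ−1), with the multivariate beta function B(α) = ∏ᵢ Γ(αᵢ) / Γ(Σᵢ αᵢ). A small self-contained sketch (the `dirichlet_pdf` helper is illustrative, not from the cited paper):

```python
import math

def dirichlet_pdf(x, alpha):
    """Density of the Dirichlet distribution at a point x on the simplex.

    x:     K coordinates, each positive, summing to 1.
    alpha: K positive concentration parameters.
    """
    if abs(sum(x) - 1.0) > 1e-9:
        raise ValueError("x must lie on the probability simplex")
    # log of the multivariate beta function (the normalizing constant).
    log_beta = sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))
    log_pdf = sum((a - 1.0) * math.log(xi) for a, xi in zip(alpha, x)) - log_beta
    return math.exp(log_pdf)

# Evaluate the three parameter settings from the caption at the
# simplex center (1/3, 1/3, 1/3).
center = [1 / 3, 1 / 3, 1 / 3]
for alpha in [(0.5, 0.5, 0.5), (0.9, 0.9, 0.9), (2, 5, 15)]:
    print(alpha, dirichlet_pdf(center, alpha))
```

As a sanity check, α = (1, 1, 1) gives the uniform distribution on the 2-simplex, whose density is Γ(3) = 2 everywhere.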
“…In all experiments, the three dependent variables were based on established measures (see Gaube et al., 2021, 2023).…”
Section: Dependent Variables
Mentioning confidence: 99%
“…While some previous studies found that providing explanations can positively affect performance in a decision-making task (Gaube et al., 2023; Pessach et al., 2020), others did not (e.g., van der Waa et al., 2021). More research regarding this discrepancy is needed to investigate the role of explainable AI advice in decision-making and how explanations ought to be presented.…”
Section: Introduction
Mentioning confidence: 95%