2023
DOI: 10.1038/s41598-023-36435-3
The impact of AI suggestions on radiologists’ decisions: a pilot study of explainability and attitudinal priming interventions in mammography examination

Abstract: Various studies have shown that medical professionals are prone to follow the incorrect suggestions offered by algorithms, especially when they have limited inputs to interrogate and interpret such suggestions and when they have an attitude of relying on them. We examine the effect of correct and incorrect algorithmic suggestions on the diagnosis performance of radiologists when (1) they have no, partial, and extensive informational inputs for explaining the suggestions (study 1) and (2) they are primed to hol…

Cited by 11 publications (6 citation statements)
References 36 publications
“…For each lung nodule, the following characteristics were provided: long axis diameter, solidity, margin characteristics, and location. These characteristics were not produced by the AI model; they were realistically simulated via manual annotation by 2 expert radiologists in consensus, in agreement with related research [40]. However, the participants were not aware of the simulation; from the radiologists’ perspective, the characteristics were therefore AI generated as well [41].…”
Section: Methods (supporting)
confidence: 86%
“…Furthermore, it is useful to analyze whether changes in the number of observed nodules and in malignancy probability are correct based on a reference standard defined by expert radiologists and pathology. This is important because automation bias, whereby radiologists rely too much on the AI recommendations, has to be prevented [40, 49].…”
Section: Discussion (mentioning)
confidence: 99%
“…In most of the papers, 50% (7 of 14 papers) used heatmaps for visualisation of areas of interest. 29–35 and 36 Additionally, Zhang et al 37 used BI-RADS-Net, Zhang et al 38 and Shen et al 35 used a saliency map, Ortega-Martorell et al 39 used uniform manifold approximation and projection (UMAP), Mital and Nguyen 40 used a tornado diagram, Rezazadeh et al 41 used a histogram and Rezazade Mehrizii et al 34 used class activation map (CAM)-based heatmaps.…”
Section: Results (mentioning)
confidence: 99%
“…The explainable/interpretable algorithms used are deep learning explanation algorithms. Of the 14 papers: Explainer alone or with Grad-CAM, 29 interpretable deep learning, 30 Grad-CAM, 31 Fisher information network (FIN), 39 AI and Polygenic Risk Scores (PRS) algorithms, 40 DenseNet, 35 Explainability-partial, 34 Explainability-full, 34 VGG-16, 37 fine-tuned MobileNet-V2 convolutional neural network, 33 OMIG explainability 32 and BI-RADS-Net-V2 38 are used in 11 papers (78.57%); SHAP 41 42 is used in 2 papers (14.3%); and LIME 36 is used in 1 paper (7.14%).…”
Section: Results (mentioning)
confidence: 99%