2022
DOI: 10.1371/journal.pdig.0000016

To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems

Abstract: Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments in favor and against explainability for AI-powered Clinical Decision Support System (CDSS) applied to a concrete use case, namely an AI-powered CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role o…



Cited by 71 publications (36 citation statements). References 60 publications.
“…Understanding how a diagnosis and treatment plan is reached is fundamental to clinical and patient autonomy, important for continued learning, and for fostering trust in any algorithm. [64][65][66] Efforts were made to present simple decision tree logic for each diagnosis. Nevertheless, the optimal method of presentation of algorithm branches to assure understanding by primary care level healthcare workers should be further explored.…”
Section: Results (mentioning)
Confidence: 99%
“…We assumed that each medical AI product has application boundaries [ 24 , 31 ] that should be reported and scored zero transparency and trustworthiness points if these were not disclosed. Similar to a previous study [ 35 ], it was challenging to judge if all the potential sources of bias, causes of harm, and caveats for deployment were sufficiently investigated. It was also challenging to judge whether bias mitigation steps are required or not and assign justified scores.…”
Section: Discussion (mentioning)
Confidence: 99%
“…It was also challenging to judge whether bias mitigation steps are required or not and assign justified scores. Scoring answers on the performed validation steps (e.g., model uncertainty and feature importance) was challenging, because the methods for these validation steps have not yet been standardized and may require adaptation to individual use cases [ 32 , 35 ]. Other assessors may find it relevant to score questions on additional info on model development or validation.…”
Section: Discussion (mentioning)
Confidence: 99%
“…For instance, these methods require a significant amount of annotated data to be used in training, which is particularly costly in digital pathology as it requires an expert pathologist to manually annotate large volumes of data ( 11 , 12 ). Also, applications developed with deep learning should consider explainability to improve confidence in their use ( 13 , 14 ).…”
Section: Introduction (mentioning)
Confidence: 99%