2023
DOI: 10.1038/s41746-023-00837-4

Solving the explainable AI conundrum by bridging clinicians’ needs and developers’ goals

Abstract: Explainable artificial intelligence (XAI) has emerged as a promising solution for addressing the implementation challenges of AI/ML in healthcare. However, little is known about how developers and clinicians interpret XAI and what conflicting goals and requirements they may have. This paper presents the findings of a longitudinal multi-method study involving 112 developers and clinicians co-designing an XAI solution for a clinical decision support system. Our study identifies three key differences between deve…


Cited by 20 publications (6 citation statements)
References 32 publications
“…35 Such a framework will determine when adoption should proceed or be revoked if the model proves valueless, is not implementable, does not operate across sites, fails in prospective evaluations or leads to potentially unsafe over-reliance. Clinicians will also demand a regulatory framework that determines, under software as a medical device legislation, when liability for errors and resultant patient harm from tool
Box 3 Limitations of attempts to render artificial intelligence (AI) models and tools fully explainable [28][29][30][31]
⇒ There is a lack of agreement on the different levels of explainability, no clear guidance on how to choose among different explainability methods and an absence of standardised methods for evaluating explainability. 28
⇒ The value to clinicians of any explanation will vary according to the specific model and its task (or use case) and the expertise (ie, level of AI or domain knowledge), preferences for accuracy relative to explainability and other contextual values of the clinician user.…”
Section: The Tool Must Align With Clinical Workflows (mentioning)
confidence: 99%
“…28
⇒ The value to clinicians of any explanation will vary according to the specific model and its task (or use case) and the expertise (ie, level of AI or domain knowledge), preferences for accuracy relative to explainability and other contextual values of the clinician user. 29
⇒ The more complex the model, especially deep learning models, the less explainable it becomes and hence expecting clinicians (and patients) to master the technical and statistical intricacies of most models is unrealistic.
⇒ Explainability methods commonly used to identify model input features strongly influencing its predictions,* while useful in making input-output relationships clearer, are imperfect post hoc approximations of model functions rather than precise explanations of the inner workings of the model.…”
Section: The Tool Must Align With Clinical Workflows (mentioning)
confidence: 99%
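The post hoc feature-attribution methods this statement refers to can be made concrete with a minimal sketch. The example below is an illustration only, using permutation importance on a synthetic tabular model; the model, feature names and data are hypothetical, and the point is that the resulting scores approximate input-output influence rather than explain the model's inner workings.

```python
# Illustration only: a minimal post hoc feature-attribution sketch using
# permutation importance. The model, feature names and synthetic data are
# hypothetical placeholders, not the cited study's tool.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["feat_a", "feat_b", "feat_c", "feat_d"]      # assumed names
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)                 # synthetic label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in held-out performance.
# This approximates each feature's influence on predictions without opening
# the model itself, which is why it remains a post hoc approximation.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```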
“…Previously, an explainable AI model was developed to predict the deterioration of patients with subarachnoid hemorrhage in the intensive care unit. To enhance the implementation of the AI tool, the perception gap between the developers and clinicians was investigated [50]. Through interviews, the study found that the developers believed that clinicians must be able to understand model operation and developed the AI model with explainability by providing SHAP values, as mentioned above.…”
Section: Challenges In Implementation Into Clinical Settings (mentioning)
confidence: 99%
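As a concrete illustration of the per-prediction explanations mentioned in the statement above, a minimal sketch of computing SHAP values for a tabular risk model is shown below. It assumes the shap package and a scikit-learn gradient-boosted classifier; the feature names, data and model are hypothetical and are not the cited study's actual tool.

```python
# Illustration only: a minimal sketch of exposing SHAP values for a tabular
# risk model, in the spirit of the explainability approach the cited study
# describes. The model, feature names and data are hypothetical placeholders.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["heart_rate", "mean_bp", "gcs", "lactate"]   # assumed inputs
X = rng.normal(size=(300, len(feature_names)))
y = (X[:, 0] - X[:, 3] > 0).astype(int)                       # synthetic deterioration label

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to per-feature SHAP values,
# i.e. how much each input pushed this patient's risk estimate up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

for name, value in zip(feature_names, np.ravel(shap_values)):
    print(f"{name}: {value:+.3f}")
```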
“…A critical aspect of this integration is where the AI fits in the clinical workflow and the outputs generated to support this workflow. Along with conveying the core prediction of the AI model, these outputs may facilitate explainability in helping the clinician understand how the model arrived at the prediction - a commonly emphasized component for enhancing trust and decision making [1][2][3][4]. While many workflow strategies and explainability techniques have been proposed for AI in medical imaging 5,6, the current scope in clinically-available AI products is not well understood.…”
Section: Main (mentioning)
confidence: 99%