2022
DOI: 10.21203/rs.3.rs-2326665/v1
Preprint

Solving the Explainable AI Conundrum: How to Bridge the Gap Between Clinicians' Needs and Developers' Goals

Abstract: Explainable AI (XAI) is considered the number one solution for overcoming implementation hurdles of AI/ML in clinical practice. However, it is still unclear how clinicians and developers interpret XAI (differently) and whether building such systems is achievable or even desirable. This longitudinal multi-method study queries clinicians and developers (n=112) as they co-developed the DCIP, an ML-based prediction system for Delayed Cerebral Ischemia. The resulting framework reveals that ambidexterity between ex…

Cited by 8 publications (8 citation statements) | References 18 publications
“…Involving clinicians in the co-design of interpretable rather than fully transparent systems could thus be a solution to solving the explainable AI conundrum as a recent study has shown [44]. Given the high safety risks for patients, these considerations are particularly important for the tasks of "diagnostic decision-making", "prescribing medication or treatment", and "analyzing medical data".…”
Section: Discussion
confidence: 99%
“…Finally, we observed a strong preference among medical students for AI systems that are explainable rather than highly accurate. This mirrors the growing emphasis on 'Explainable AI' in the medical field and underscores the urgent need for developing AI algorithms that transparently disclose their decisions and limitations, especially when applied to the medical domain, to promote trust and acceptance among healthcare students, professionals, and patients [51][52][53][54].…”
Section: Discussion
confidence: 99%
“…This study might produce explanations that continue to be correct, pertinent, and in line with how drug development is evolving [87]. The insights provided to researchers and clinicians can maintain their interpretability and reliability by using adaptable XAI frameworks that automatically update explanations in response to model updates or changes in data distribution [88]. This will enable informed decision-making even in the face of changing complexity.…”
Section: B. Future Research Directions, 1) Dynamic Explanation Adaptation
confidence: 95%
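The "adaptable XAI framework" idea in the last statement can be made concrete with a minimal sketch: an explainer that is rebuilt whenever the model is retrained or the incoming data distribution drifts. The sketch below uses SHAP with a scikit-learn model and a per-feature Kolmogorov-Smirnov test as a stand-in drift detector; the AdaptiveExplainer class, its retraining policy, and all parameter names are illustrative assumptions, not an API from the cited papers.

```python
"""Sketch of dynamic explanation adaptation: refresh the SHAP explainer
when the model is retrained or input drift is detected. Names and the
retraining policy are hypothetical, chosen only for illustration."""
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier
import shap


class AdaptiveExplainer:
    """Keeps a SHAP explainer in sync with a model and its data distribution."""

    def __init__(self, model, reference_X, drift_alpha=0.01):
        self.model = model
        self.reference_X = reference_X   # data the current explainer reflects
        self.drift_alpha = drift_alpha   # KS-test significance level for drift
        self.explainer = shap.TreeExplainer(model)

    def _drift_detected(self, new_X):
        # Two-sample KS test per feature against the reference batch.
        for j in range(self.reference_X.shape[1]):
            if ks_2samp(self.reference_X[:, j], new_X[:, j]).pvalue < self.drift_alpha:
                return True
        return False

    def explain(self, new_X, new_y=None):
        # If the incoming batch has drifted and labels are available,
        # retrain the model and rebuild the explainer before explaining.
        if new_y is not None and self._drift_detected(new_X):
            self.model.fit(new_X, new_y)   # placeholder retraining policy
            self.explainer = shap.TreeExplainer(self.model)
            self.reference_X = new_X
        return self.explainer.shap_values(new_X)


# Usage: train once, then explanations track later, shifted data batches.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
adaptive = AdaptiveExplainer(model, X)
shifted_X = X + 1.5                      # simulated distribution shift
values = adaptive.explain(shifted_X, new_y=y)
```

In a clinical deployment the retraining step would of course be gated by validation and regulatory review; the point of the sketch is only that the explainer is versioned together with the model, so explanations never lag behind a model update or a shifted data distribution.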