2021
DOI: 10.1007/978-3-030-83620-7_7

Explainable AI, But Explainable to Whom? An Exploratory Case Study of xAI in Healthcare

Abstract: Advances in AI technologies have resulted in superior levels of AI-based model performance. However, this has also led to a greater degree of model complexity, resulting in "black box" models. In response to the AI black box problem, the field of explainable AI (xAI) has emerged with the aim of providing explanations catered to human understanding, trust, and transparency. Yet, we still have a limited understanding of how xAI addresses the need for explainable AI in the context of healthcare. Our research expl…

Cited by 25 publications (14 citation statements)
References 63 publications
“…While explainability is needed for the development of clinical decision support systems [11, 37–39], a number of researchers have indicated that current explainability approaches are insufficient for use in a clinical setting [38, 61]. There are valid concerns associated with their critiques.…”
Section: Recommendations For Integration Of Confidence Estimation App…
Mentioning confidence: 99%
“…However, in and of themselves, they are insufficient to the task. It is also critical that automated neuroimaging-based clinical decision support systems be explainable [11, 37–39]. If clinicians are to use clinical decision support systems, they are ethically obligated to be able to explain the recommendations of such systems to their patients [11].…”
Section: Introduction
Mentioning confidence: 99%
“…If neuroimaging clinical decision support systems (CDSS) are ever to be implemented in a clinical setting, they must be both robust and reliable [1]. One aspect of this reliability is that clinicians need to know whether there are systematic differences in how the model will perform for different patients [2].…”
Section: Introduction
Mentioning confidence: 99%
“…Beyond the use of AI to enhance the experiences of both clinicians and patients in healthcare [33], the growth of explainable AI has made models understandable, providing explanations catered to humans and promoting transparency about decisions and, consequently, reliability [37], [45]. This is essential because healthcare professionals and patients do not accept decisions without understanding and trusting the explanations, or at least how the decisions are made [46].…”
Section: Discussion and Future Directions
Mentioning confidence: 99%