2024
DOI: 10.1109/tai.2023.3266418
A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?

Cited by 35 publications (12 citation statements)
References 166 publications
“…To create the method, the Inception transfer-learning (TL) model was modified. In 2023, Bharati et al (30) addressed XAI methods in healthcare. The study examined current XAI trends and outlined the main directions in which the field is moving.…”
Section: Related Work
confidence: 99%
“…For this reason, AI research is increasingly interested in trustworthy AI [1], a broad paradigm establishing how to properly design, develop, and deploy real-world AI applications. Among its principles, transparency requires providing the user with an understanding of the autonomous decisions generated by the model; this is the subject of eXplainable AI (XAI) research [2, 3]. XAI encompasses a wide range of methodologies, which can be broadly categorized as post-hoc explanations of black-box models and transparent-by-design techniques [4].…”
Section: Introduction
confidence: 99%
“…To overcome these limitations, first statistical models such as Vector Autoregression (VAR), and later Deep Learning (DL) based models, enhanced predictive power with non-linear formulations, also taking multivariate data into account [7]. However, these models are considered opaque in terms of explainability [8]. The development of post-hoc explainability tools such as SHAP values [9] partially addressed this issue, but these tools still rely on the expertise of the developer and are not as well-defined for time-series models.…”
Section: Introduction
confidence: 99%
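The excerpts above contrast post-hoc explanations with transparent-by-design models. As a minimal, library-free sketch of the post-hoc idea (permutation importance stands in here for SHAP-style attribution; the `black_box` model and its coefficients are hypothetical, not from the reviewed paper):

```python
import random

# Hypothetical black-box predictor over 3 features: feature 0 dominates,
# feature 1 matters slightly, feature 2 is ignored entirely.
def black_box(x):
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Post-hoc importance: how much does shuffling one feature raise MSE?"""
    rng = random.Random(seed)

    def mse(Xs):
        return sum((model(x) - t) ** 2 for x, t in zip(Xs, y)) / len(y)

    base = mse(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)  # break the link between feature j and the target
            Xp = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
            drops.append(mse(Xp) - base)
        importances.append(sum(drops) / n_repeats)
    return importances

rng = random.Random(42)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [black_box(x) for x in X]
imp = permutation_importance(black_box, X, y)
```

The model is only queried, never inspected, which is exactly why such explanations work on opaque models but depend on the developer's choices (perturbation scheme, number of repeats), as the excerpt notes.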