2021
DOI: 10.1109/tnnls.2020.3027314

A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI

Abstract: Recently, artificial intelligence and machine learning in general have demonstrated remarkable performance in many tasks, from image processing to natural language processing, especially with the advent of deep learning (DL). Along with research progress, they have encroached upon many different fields and disciplines. Some of them require a high level of accountability and thus transparency, for example, the medical sector. Explanations for machine decisions and predictions are thus needed to justify their rel…

Cited by 1,075 publications (673 citation statements)
References 121 publications
“…AI-enabled medicines can suffer from a lack of interpretability – where either data or algorithm decisions cannot be readily understood. AI-enabled medicines must be easily interpretable for widespread adoption (Vellido, 2019; Tonekaboni et al., 2019; Tjoa & Guan, 2019). Visualizations are one solution for creating interpretable representations of both high-dimensional data and complex algorithm decisions.…”
Section: Results (mentioning)
confidence: 99%
“…Lastly, artificial intelligence (AI) is becoming a powerful tool in medicine. Importantly, AI-enabled medicines must be easily interpretable for widespread adoption (Vellido, 2019; Tonekaboni et al., 2019; Tjoa & Guan, 2019). AI interpretability can be enhanced using visualizations.…”
Section: Introduction (mentioning)
confidence: 99%
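The visualization idea recurring in the two statements above can be made concrete with a short sketch: projecting a high-dimensional data set down to two dimensions and colouring the points by class so that the overall structure can be inspected. This is a minimal illustration only; the synthetic data, feature count, and choice of PCA are assumptions, not methods taken from the surveyed paper or the citing studies.

```python
# Hedged sketch: a 2-D projection of high-dimensional data as an interpretable view.
# The synthetic data set and all parameter choices are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic stand-in for high-dimensional clinical features (two classes).
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 50)),
               rng.normal(0.8, 1.0, size=(200, 50))])
y = np.array([0] * 200 + [1] * 200)

# Project to two principal components so class structure can be inspected visually.
coords = PCA(n_components=2).fit_transform(X)

plt.scatter(coords[:, 0], coords[:, 1], c=y, cmap="coolwarm", s=12)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.title("2-D view of a high-dimensional data set (illustrative)")
plt.show()
```

PCA is used here only because it is deterministic and fast; a non-linear embedding such as t-SNE or UMAP could be substituted for the same purpose.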
“…On the other hand, an attempt is made to increase the complexity of XAI models (Arrieta et al., 2019). However, biomedical studies that compare different existing models and provide meaningful conclusions about their interpretability contribute to this research area (Tjoa & Guan, 2019). The present report contributes to this by proposing a comparative scenario for selecting a suitable XAI, since no prior selection can be recommended: even on a limited number of data sets, the XAIs did not show a consistent ordering of classification performance.…”
Section: Discussion (mentioning)
confidence: 99%
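The comparative scenario described in this statement can be sketched as a small benchmarking loop that ranks candidate models on several data sets; the specific models, data sets, and scoring below are assumptions chosen only to illustrate why no single choice can be recommended in advance.

```python
# Hedged sketch: rank several candidate models on multiple data sets, mirroring
# the idea that the performance ordering may differ from data set to data set.
# The model and data-set choices here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer, load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

datasets = {"breast_cancer": load_breast_cancer(return_X_y=True),
            "wine": load_wine(return_X_y=True)}
models = {"logistic_regression": LogisticRegression(max_iter=5000),
          "decision_tree": DecisionTreeClassifier(max_depth=4),
          "random_forest": RandomForestClassifier(n_estimators=100)}

for ds_name, (X, y) in datasets.items():
    # Mean cross-validated accuracy per model on this data set.
    scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(ds_name, {k: round(v, 3) for k, v in scores.items()}, "ranking:", ranking)
```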
“…It is still not easy to explain exactly what a neural network learns during training for any particular classification task because of the complexity of the algorithm (the black box problem [29]). All methods strive to reflect the patterns the model learned from the data; examples include synthetic images generated by generative adversarial networks, t-distributed stochastic neighbor embedding (t-SNE) clustering of feature layers to find meaningful groups, and the identification of concepts of interest that can be labeled as a specific feature [30]. A caveat of the black-box explainability approach is that it uses one model to explain a second, and the models can be unreliable and misleading.…”
Section: Domain-specific Expertise (mentioning)
confidence: 99%
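Among the techniques listed in this statement, t-SNE clustering of a feature layer is straightforward to sketch. The toy network, the layer slice used for feature extraction, and the random inputs below are placeholders assumed for illustration; a real study would use a trained model and its actual data.

```python
# Hedged sketch: embed a model's penultimate-layer activations with t-SNE
# to look for meaningful groups. The tiny network and random inputs are
# placeholders assumed for illustration only.
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

torch.manual_seed(0)

# Toy classifier standing in for a trained model.
model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),   # penultimate feature layer
    nn.Linear(32, 2),
)

X = torch.randn(300, 128)           # stand-in inputs
with torch.no_grad():
    features = model[:4](X)         # activations of the 32-unit feature layer

# Non-linear 2-D embedding of the feature-layer activations.
coords = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features.numpy())

plt.scatter(coords[:, 0], coords[:, 1], s=10)
plt.title("t-SNE of penultimate-layer features (illustrative)")
plt.show()
```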
“…One solution is to use the aforementioned explainability tools and/or interpretable model approaches to help identify which patterns the model is learning [29-31]. Another solution, recommended by Riley, is to ask the model to predict other things.…”
Section: Domain-specific Expertise (mentioning)
confidence: 99%
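A minimal example of the kind of explainability tool referred to here is a gradient-based saliency map over the input features; the toy model and input below are illustrative assumptions, not the approach of any cited work.

```python
# Hedged sketch: gradient-based saliency on an illustrative toy classifier,
# showing which input features most influence the predicted class.
# The model and input are placeholders assumed for this example.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 20, requires_grad=True)   # one stand-in input vector
logits = model(x)
predicted_class = logits.argmax(dim=1)

# Gradient of the predicted-class score with respect to the input.
score = logits[0, predicted_class.item()]
score.backward()
saliency = x.grad.abs().squeeze()

# Features with the largest gradient magnitude are the most influential inputs.
top = torch.topk(saliency, k=5)
print("most influential feature indices:", top.indices.tolist())
```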