2022
DOI: 10.3389/fgene.2022.822666

Explanation-Driven Deep Learning Model for Prediction of Brain Tumour Status Using MRI Image Data

Abstract: Cancer research has seen explosive development exploring deep learning (DL) techniques for analysing magnetic resonance imaging (MRI) images for predicting brain tumours. We have observed a substantial gap in explanation, interpretability, and high accuracy for DL models. Consequently, we propose an explanation-driven DL model by utilising a convolutional neural network (CNN), local interpretable model-agnostic explanation (LIME), and Shapley additive explanation (SHAP) for the prediction of discrete subtypes …
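A minimal sketch of the kind of pipeline the abstract describes: a small Keras CNN classifies MRI slices into the four tumour classes and LIME highlights the superpixels driving one prediction. The architecture, input size, class order, and variable names below are illustrative assumptions, not the authors' published model or code.

# Illustrative sketch only -- not the authors' architecture or code.
import numpy as np
import tensorflow as tf
from lime import lime_image

NUM_CLASSES = 4            # assumed class order: glioma, meningioma, pituitary, normal
IMG_SHAPE = (224, 224, 3)  # assumed input size; grayscale slices replicated to 3 channels

def build_cnn() -> tf.keras.Model:
    # Minimal stand-in CNN classifier for brain-MRI slices.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=IMG_SHAPE),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_cnn()
# model.fit(train_images, train_labels, ...) would train on the MRI dataset here.

def predict_fn(images: np.ndarray) -> np.ndarray:
    # LIME passes batches of perturbed copies of the image; return class probabilities.
    return model.predict(images.astype("float32") / 255.0, verbose=0)

explainer = lime_image.LimeImageExplainer()
mri_slice = np.random.randint(0, 256, IMG_SHAPE, dtype=np.uint8)  # placeholder for a real slice
explanation = explainer.explain_instance(
    mri_slice, predict_fn, top_labels=1, hide_color=0, num_samples=1000
)
# Superpixels that most support the top predicted class:
highlighted, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)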

Cited by 65 publications (28 citation statements) · References 30 publications
“…Even though CNNs have demonstrated remarkable performance in brain tumor classification tasks in the majority of the reviewed studies, their level of trustworthiness and transparency must be evaluated in a clinical context. Of the included articles, only two studies, conducted by Artzi et al [122] and Gaur et al [127], investigated the black-box nature of CNN models for brain tumor classification to ensure that the model is looking in the correct place rather than at noise or unrelated artifacts.…”
Section: Results
Mentioning confidence: 99%
“…Gaur et al [127] proposed a CNN-based model integrated with local interpretable model-agnostic explanation (LIME) and Shapley additive explanation (SHAP) for the classification and explanation of meningioma, glioma, pituitary, and normal images using an MRI dataset of 2870 MR images. For better classification results, Gaussian noise was introduced in the pre-processing step to improve the learning for the CNN, with mean = 0 and a standard deviation of 10^0.5.…”
Section: Results
Mentioning confidence: 99%
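A short sketch of the noise-injection step quoted above, assuming the zero-mean Gaussian noise with standard deviation 10^0.5 is added directly to 8-bit pixel intensities and clipped back to the valid range (the clipping and dtype handling are assumptions, not stated in the paper):

import numpy as np

def add_gaussian_noise(image: np.ndarray, mean: float = 0.0,
                       std: float = 10 ** 0.5) -> np.ndarray:
    # Zero-mean Gaussian noise with standard deviation 10**0.5 (about 3.16 intensity levels),
    # matching the preprocessing described in the citing review.
    noise = np.random.normal(loc=mean, scale=std, size=image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)  # assumption: 8-bit intensity range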
“…XAI can also be used to describe how a given imaging voxel contributes to a model’s output. Gaur et al used a deep learning model to identify brain tumor subtypes, with an accuracy of 94.64% (32). To explain how their model made its predictions, they provide examples using the SHAP framework where SHAP values are superimposed on imaging voxels, and in doing so illustrate graphically and intuitively how example images are classified as normal or meningioma, as illustrated in Figure 3.…”
Section: Utilization of XAI in Oncology Research
Mentioning confidence: 99%
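A hedged sketch of the overlay described above: pixel-level SHAP values for a trained Keras CNN (such as the stand-in model sketched under the abstract) are drawn on top of the input slices with shap.image_plot. The model and image arrays are assumed placeholders, not the authors' data or code.

import shap

# Assumed placeholders: `model` is a trained Keras CNN, `train_images` and
# `test_images` are float arrays shaped (N, H, W, 3) in the model's input scale.
background = train_images[:100]                  # reference distribution for the explainer
explainer = shap.GradientExplainer(model, background)

# One array of SHAP values per output class, same shape as the inputs.
shap_values = explainer.shap_values(test_images[:4])

# Red pixels push the prediction towards a class, blue pixels push against it.
shap.image_plot(shap_values, test_images[:4])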
“…DL applications are successfully implemented in the classification of medical images and clinical decision-making tasks Greenspan et al (2016). The use of machine learning (ML) algorithms and software, or artificial intelligence (AI) Gaur et al (2022); Biswas et al (2021), to replicate human cognition in the analysis, display, and comprehension of complicated medical and health-care data is referred to as AI in healthcare Biswas Milon and Kawsher (2022).…”
Section: Introduction
Mentioning confidence: 99%