2021
DOI: 10.3390/jpm11111213

Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization

Abstract: Primary malignancies in adult brains are globally fatal. Computer vision, especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have provided scores of unprecedented accuracy in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, perform as a black box, concealing the rational interpretations that are an essential step t…

Cited by 37 publications (17 citation statements)
References 26 publications
“…These methods can be used to generate visual explainability maps for deep learning models like 2D and 3D CNN, VGG [85], and Resnet-50 [86] (for classification) and 2D/3D U-Net (for segmentation). In [87], the high-level features of three deep convolutional neural networks (DenseNet-121, GoogLeNet, MobileNet) are analysed using the Grad-CAM explainability technique. The Grad-CAM outputs helped distinguish these three models' brain tumor lesion localization capabilities.…”
Section: Perspectives
Mentioning confidence: 99%
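The Grad-CAM workflow referenced in the statement above can be sketched concisely. Below is a minimal illustration, assuming PyTorch and torchvision's pretrained DenseNet-121 (one of the three networks compared in [87]); the hook-based implementation, the choice of `denseblock4` as the target layer, and the random input are illustrative assumptions, not the cited study's code.

```python
# Minimal Grad-CAM sketch for a DenseNet-121 classifier (assumed setup:
# PyTorch + torchvision; target layer and input preprocessing are illustrative).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet121(weights="DEFAULT").eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["feat"] = output            # feature maps of the target layer

def bwd_hook(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0]      # gradients w.r.t. those feature maps

# Last dense block of torchvision's DenseNet-121 as the target layer (assumption).
target_layer = model.features.denseblock4
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(x, class_idx=None):
    """Return an (H, W) Grad-CAM heatmap in [0, 1] for a (1, 3, H, W) input."""
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    acts = activations["feat"]                                   # (1, C, h, w)
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))      # weighted sum + ReLU
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0].detach()

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # overlay on the MRI slice to localize
```

Swapping in DenseNet-121, GoogLeNet, or MobileNet only changes the model constructor and the target layer; the resulting heatmaps can then be compared qualitatively, which is the kind of model-to-model localization comparison the quoted statement describes.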
“…The purpose of this research (Esmaeili et al, 2021) is to examine how effective specific deep-learning approaches are at detecting tumor lesions and separating them from healthy regions in magnetic resonance images. Despite a significant relationship between classification and tumor localization accuracy (p = 0.005, R = 0.46), the established AI architectures examined in this study detect tumor-containing brains based on irrelevant criteria.…”
Section: Literature Survey
Mentioning confidence: 99%
“…For brain cancer classification, Windisch et al [ 28 ] applied 2D Grad-CAM to generate heatmaps indicating which areas of the input MRI made the classifier decide on the category of the existence of a brain tumor. Similarly, 2D Grad-CAM was used in [ 29 ] to evaluate the performance of three DL models in brain tumor classification. The key limitation of these studies is that experiments were conducted on 2D MRI slices without investigating the model on 3D medical applications.…”
Section: Related Work
Mentioning confidence: 99%
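On the 2D-versus-3D point raised in this statement: the Grad-CAM computation itself extends to volumetric classifiers with only the dimensionality changed. The sketch below uses a toy 3D CNN (a stand-in, not any of the cited models) and assumes MRI volumes shaped (batch, channel, depth, height, width); gradients are pooled over all three spatial axes and the class-activation map is upsampled trilinearly.

```python
# Hypothetical 3D Grad-CAM variant (toy model, not from the cited studies):
# same recipe as 2D, but activations are (1, C, D, H, W) and pooling/upsampling
# cover three spatial dimensions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tiny3DNet(nn.Module):
    """Toy volumetric classifier standing in for a 3D MRI model."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        f = self.features(x)                          # (1, 16, D/2, H/2, W/2)
        return self.head(f.mean(dim=(2, 3, 4))), f    # logits and feature maps

def grad_cam_3d(model, volume, class_idx):
    logits, feats = model(volume)
    feats.retain_grad()                               # keep grads of the feature maps
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = feats.grad.mean(dim=(2, 3, 4), keepdim=True)   # pool over D, H, W
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=volume.shape[-3:], mode="trilinear",
                        align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8)).detach()

model = Tiny3DNet().eval()
volume = torch.randn(1, 1, 32, 64, 64)                # one single-channel MRI volume
saliency = grad_cam_3d(model, volume, class_idx=1)    # (1, 1, 32, 64, 64) heatmap
```

The per-voxel saliency volume can then be inspected slice by slice or rendered alongside the MRI, which is the kind of 3D investigation the quoted statement notes is missing from the 2D studies.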