2024
DOI: 10.1038/s41598-024-51867-1

NeuroNet19: an explainable deep neural network model for the classification of brain tumors using magnetic resonance imaging data

Rezuana Haque,
Md. Mehedi Hassan,
Anupam Kumar Bairagi
et al.

Abstract: Brain tumors (BTs) are one of the deadliest diseases that can significantly shorten a person’s life. In recent years, deep learning has become increasingly popular for detecting and classifying BTs. In this paper, we propose a deep neural network architecture called NeuroNet19. It utilizes VGG19 as its backbone and incorporates a novel module named the Inverted Pyramid Pooling Module (iPPM). The iPPM captures multi-scale feature maps, ensuring the extraction of both local and global image contexts. This enhanc…
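The abstract describes the iPPM only at a high level: pooling that captures multi-scale feature maps so both local and global context survive. As a concrete illustration, here is a minimal PyTorch sketch of a pyramid-pooling block in that spirit; the bin sizes, channel reduction, and fusion convolution are assumptions for illustration, not the authors' published design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InvertedPyramidPooling(nn.Module):
    """Sketch of a pyramid-pooling block in the spirit of the paper's iPPM.

    The excerpt only says the module captures multi-scale feature maps
    (local and global context); the bin sizes, channel widths, and fusion
    used here are assumptions, not the authors' exact design.
    """

    def __init__(self, in_channels: int, bin_sizes=(6, 3, 2, 1)):
        super().__init__()
        mid = in_channels // len(bin_sizes)  # assumed channel reduction
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),         # pool to a b x b grid
                nn.Conv2d(in_channels, mid, 1),  # 1x1 conv to reduce channels
                nn.ReLU(inplace=True),
            )
            for b in bin_sizes
        )
        self.fuse = nn.Conv2d(in_channels + mid * len(bin_sizes), in_channels, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        # Upsample each pooled map back to the input resolution and concatenate,
        # so fine (local) and coarse (global) context are fused together.
        feats = [x] + [
            F.interpolate(branch(x), size=(h, w), mode="bilinear", align_corners=False)
            for branch in self.branches
        ]
        return self.fuse(torch.cat(feats, dim=1))

# Example: apply to a VGG19-sized feature map (512 channels, 7x7 spatial).
if __name__ == "__main__":
    x = torch.randn(1, 512, 7, 7)
    print(InvertedPyramidPooling(512)(x).shape)  # torch.Size([1, 512, 7, 7])
```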

Cited by 7 publications (3 citation statements) · References 24 publications
“…Deep learning models are seldom analyzed to understand which features are most important, making it challenging to understand the underlying decision-making process. Interpretability is crucial in the medical domain, where clinicians need to justify and explain the reasoning behind diagnostic results [76]. Addressing this limitation and improving the interpretability of our models are areas of ongoing research.…”
Section: Limitations of Our Models
Mentioning confidence: 99%
“…Developing techniques to enhance the explainability of deep learning models will increase their acceptance and trust in clinical practice. Interrogating the explainability of models can allow clinicians to understand the steps and features that most greatly influence classification tasks [76]. Additionally, exploring transfer learning and domain adaptation methods can improve the generalizability of models, enabling them to perform well with new clinical data.…”
Section: Future Direction of Brain Tumor Classification
Mentioning confidence: 99%
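The transfer-learning direction mentioned above is straightforward to prototype. Below is a minimal sketch assuming an ImageNet-pretrained VGG19 from torchvision with a frozen backbone and a new classification head; the four-class output (e.g. glioma, meningioma, pituitary, no tumor) is an assumed setup for illustration, not taken from the excerpt.

```python
import torch.nn as nn
from torchvision import models

# Hedged transfer-learning sketch: start from an ImageNet-pretrained VGG19
# and retrain only a new classification head on MRI data.
# The four-class layout is an assumption, not taken from the excerpt.
NUM_CLASSES = 4

model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)

# Freeze the convolutional backbone so only the new head adapts to MRI data.
for p in model.features.parameters():
    p.requires_grad = False

# Replace the final fully connected layer with a task-specific one.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)
```

A common follow-up for domain adaptation is to later unfreeze the deepest convolutional blocks and fine-tune them at a much lower learning rate.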
“…Another framework for brain tumor diagnosis, NeuroNet19, combines a 19-layer VGG19 that detects complex hierarchical features in images with an inverted pyramid pooling module (iPPM) model, which refines these features, leveraging post-hoc interpretability [100]. Methodology-based techniques are categorized into Backpropagation-based and Perturbation-based [67], among which Backpropagation-based GradCAM was proposed to describe CNN models with good performance [101].…”
Section: XAI-Based CDSS
Mentioning confidence: 99%
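Grad-CAM, named in the statement above as the backpropagation-based method [101], can be sketched in a few lines. The version below assumes a torchvision VGG19 and targets its last convolutional layer (a common convention, not a detail from the cited works): gradients of the top class score are pooled per channel and used to weight the activation maps.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal Grad-CAM sketch for a VGG19-style classifier.
# Targeting the last conv layer is a common convention, assumed here.
model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
target_layer = model.features[34]  # last Conv2d in torchvision's VGG19

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0]))

def grad_cam(x: torch.Tensor) -> torch.Tensor:
    """Heatmap in [0, 1] for the top predicted class of x, shape (1, 3, H, W)."""
    logits = model(x)
    model.zero_grad()
    logits[0, logits[0].argmax()].backward()
    # Pool gradients per channel, weight the activations, ReLU, upsample, normalize.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return cam / (cam.max() + 1e-8)

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # stand-in for a preprocessed MRI slice
```

Overlaying the resulting heatmap on the input slice is what lets a clinician see which image regions drove the model's prediction.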