2021
DOI: 10.1016/j.compbiomed.2021.104410

Visual interpretability in 3D brain tumor segmentation network

Cited by 42 publications (28 citation statements)
References 26 publications
“…The development of autonomous brain tumor segmentation and classification models using MRI images is still a challenging task. The challenges are due to several constraints, including the effect of different types of noise embedded in brain MRI images [116,117,118], motion and metal artifacts during image acquisition [164], low-resolution MRI images [165], and the lack of deep learning model interpretability and transparency [166,167].…”
Section: Discussion (mentioning)
confidence: 99%
“…The authors examined their method on the feature extraction of a U-Net model on the Cityscapes dataset and showed that the initial convolutional layers extract low-level, edge-like features. Grad-CAM-based approaches were also used to visualize 2-dimensional [6,26] and 3-dimensional brain tumor segmentation [27].…”
Section: Discussion (mentioning)
confidence: 99%
“…Already available in the Keras method libraries, Grad-CAM methods have demonstrated great capability for image-region discrimination in various clinical and computer vision studies [25,26,27,28]. Despite the utility of Grad-CAM, there are some limitations associated with gradient-based explanation and Grad-CAM estimations, especially when targeting multiple objects in an image.…”
Section: Discussion (mentioning)
confidence: 99%
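
Since the statements above repeatedly point to Grad-CAM, a minimal sketch of the technique may help; it assumes a TensorFlow/Keras classification model, and the function, layer, and argument names are illustrative rather than taken from the cited works.

```python
# Minimal Grad-CAM sketch (assumed setup: a Keras model with a named
# 2D convolutional layer; `conv_layer_name` and `class_index` are
# hypothetical parameters, not from the cited papers).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index):
    """Return a (h, w) heatmap in [0, 1] for one image of shape (H, W, C)."""
    # Map the input to both the conv feature maps and the final prediction.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, class_index]
    # Gradient of the class score w.r.t. the conv feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Global-average-pool the gradients to get one weight per channel.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of the feature maps, rectified and normalized.
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```

The same pooling-then-weighting recipe extends to 3D volumes by swapping the spatial axes; the sketch above keeps to the 2D case for brevity.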
“…A measure related to completeness was defined in [43], aiming to capture the proportion of training images represented by the learned visual concepts, alongside two other metrics: inter- and intraclass diversity, and the faithfulness of explanations, computed by perturbing relevant patches and measuring the drop in classification confidence. Other articles followed a similar approach to validate relevant pixels or features identified with a transparent method; for example, [83] constructed a deletion curve by plotting the Dice score against the percentage of pixels removed, and [1] defined a recall rate for when the model proposes a certain number of informative channels. [111] proposed evaluating the consistency between visualization results and the outputs of a CNN by computing the L1 error between predicted class scores and explanation heatmaps.…”
Section: R: Reporting (mentioning)
confidence: 99%
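
The deletion-curve evaluation described above is straightforward to sketch: zero out increasing fractions of the most relevant pixels (per a saliency map) and track how the Dice score decays. The sketch below assumes NumPy arrays and a `predict_fn` callable returning a probability map; all names are hypothetical, not from [83].

```python
# Deletion-curve sketch: Dice score vs. fraction of salient pixels removed.
# `predict_fn`, `image`, `target_mask`, and `saliency` are assumed inputs.
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice overlap between two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

def deletion_curve(predict_fn, image, target_mask, saliency, steps=10):
    """Zero out the most salient pixels in growing fractions and record
    the Dice score of the model's prediction at each step."""
    order = np.argsort(saliency.ravel())[::-1]  # most relevant pixels first
    n = order.size
    fractions, scores = [], []
    for k in range(steps + 1):
        frac = k / steps
        perturbed = image.copy()
        perturbed.flat[order[: int(frac * n)]] = 0.0  # delete top-ranked pixels
        pred = predict_fn(perturbed) > 0.5            # binarize the prediction
        fractions.append(frac)
        scores.append(dice(pred, target_mask))
    return fractions, scores
```

A faithful explanation should produce a curve that drops steeply: removing the pixels the method marks as most relevant should quickly degrade the segmentation.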
“…Segmentation was another major application field (n=9). Research on transparency mainly focused on segmentation problems for brain and cardiac MRI [33,39,42,83,73,75,91]. Other segmentation problems included mass segmentation in mammograms [86], cardiac segmentation in ultrasound [73], liver tumor segmentation in hepatic CT images, and skin lesion segmentation in dermatological images [34].…”
Section: T: Task (mentioning)
confidence: 99%