2020
DOI: 10.1609/aaai.v34i10.7244

Towards Interpretable Semantic Segmentation via Gradient-Weighted Class Activation Mapping (Student Abstract)

Abstract: Convolutional neural networks have become state-of-the-art in a wide range of image recognition tasks. The interpretation of their predictions, however, is an active area of research. Whereas various interpretation methods have been suggested for image classification, the interpretation of image segmentation still remains largely unexplored. To that end, we propose seg-grad-cam, a gradient-based method for interpreting semantic segmentation. Our method is an extension of the widely-used Grad-CAM method, applie…
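The weighting scheme the abstract describes can be sketched in a few lines. As in Grad-CAM, channel weights are the global-average-pooled gradients of a score with respect to a convolutional layer's feature maps; for segmentation, that score is the class logit summed over a pixel region of interest. The sketch below is a minimal illustration, not the authors' implementation: it assumes the activations and gradients have already been extracted (e.g. via framework hooks), and the array names are hypothetical.

```python
import numpy as np

def seg_grad_cam(activations, gradients):
    """Grad-CAM-style relevance heatmap for semantic segmentation.

    activations: (K, H, W) feature maps of a chosen conv layer
    gradients:   (K, H, W) gradients of the segmentation score
                 (class logit summed over a pixel region) w.r.t.
                 those feature maps
    returns:     (H, W) non-negative relevance heatmap
    """
    # alpha_k: global-average-pool each channel's gradient map
    alphas = gradients.mean(axis=(1, 2))             # shape (K,)
    # weighted sum of feature maps over channels, then ReLU
    cam = np.tensordot(alphas, activations, axes=1)  # shape (H, W)
    return np.maximum(cam, 0.0)

# Toy example with random feature maps and gradients
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4, 4))
G = rng.standard_normal((8, 4, 4))
heatmap = seg_grad_cam(A, G)
print(heatmap.shape)  # (4, 4)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the image; the choice of pixel region (a single pixel, an object mask, or the whole image) determines what the explanation is relative to.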

Cited by 84 publications (45 citation statements)
References 6 publications
“…We generally find the convolutional layers near the end of U-Net's contracting path capture more comprehensible features, while the convolutional layers in the expansive path successively combine features from the contracting path and produce heatmaps that look more and more similar to the logits of the selected class. This Grad-CAM finding is supported by similar results of U-Net semantic segmentation on the Cityscapes dataset [57]. In Fig.…”
Section: ResNet-18 (supporting)
confidence: 78%
“…For data interpretability, a semantic segmentation extension of the gradient class activation mapping (GradCAM++) strategy was applied [43], which aims to visually test the deep learning model's ability to learn relevant features separating between nerve and background. Specifically, a Grad-CAM++ extension of the seminal work in [44] for semantic segmentation was proposed to capture the entire object completeness. Then, an explanation map-based quantitative assessment was carried out for relevance analysis.…”
Section: Introduction (mentioning)
confidence: 99%
“…There are only a few previous works that focus on segmentation interpretability [6,7]. These works solve the problem in a lower resolution setup, and then up-sample to get pixel level explanation.…”
Section: Introduction (mentioning)
confidence: 99%
“…These works solve the problem in a lower-resolution setup, and then up-sample to get a pixel-level explanation. They also only return explanations with respect to a single pixel [6] or a part of the image [7]. Our method, on the other hand, seamlessly returns the relative importance of each pixel with respect to the entire input.…”
Section: Introduction (mentioning)
confidence: 99%