2021
DOI: 10.1007/978-3-030-68796-0_20

Recursive Division of Image for Explanation of Shallow CNN Models

Cited by 3 publications (4 citation statements)
References 16 publications
“…However, the problem with this random perturbation is that there will be many additional regions that are also significantly represented. Gorokhovatskyi et al [22] propose a method to explain the hidden parts of CNN, which segments images into w × h square areas; the square areas are occluded by turning them white or black, and different areas are occluded sequentially to determine the region in which the model is interested. Similarly, the square-area division ignores the contour information of elements in the image and does not have an advantage in visual interpretation.…”
Section: Related Work
confidence: 99%
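As a rough illustration of the occlusion scheme described in this statement, the sketch below divides an image into square cells and occludes each in turn, recording the drop in the model's class score. It is a minimal sketch, assuming a hypothetical `model_predict(batch)` function that returns class probabilities; the cell size and white fill are illustrative choices, not the cited authors' exact settings.

```python
import numpy as np

def occlusion_map(image, model_predict, target_class, cell=16, fill=1.0):
    """Per-cell importance map for `image` (H x W x C, floats in [0, 1]).

    `model_predict` is a hypothetical function mapping a batch of images
    to class probabilities; `fill=1.0` occludes with white, 0.0 with black.
    """
    h, w = image.shape[:2]
    base = model_predict(image[None])[0, target_class]  # unoccluded score
    heat = np.zeros((h // cell, w // cell))
    for i in range(h // cell):
        for j in range(w // cell):
            occluded = image.copy()
            occluded[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell] = fill
            score = model_predict(occluded[None])[0, target_class]
            heat[i, j] = base - score  # large drop => influential cell
    return heat
```

A heat map built this way highlights the cells whose occlusion most reduces the target-class score, which is the "region in which the model is interested" referred to above.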
“…For example, M. D. Zeiler et al [14] use a gray pixel block to occlude a certain part of the image; D. Seo et al [20] process the image by eliminating a portion of its superpixels; O. Gorokhovatskyi et al [22] use black or white squares to achieve the perturbation. Even though the above methods can produce the required perturbed images, they cannot achieve the goal of observing how a certain feature supports the classification result.…”
Section: Perturbation
confidence: 99%
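The superpixel variant mentioned in this statement can be sketched with an off-the-shelf SLIC segmentation: each superpixel is "eliminated" in turn (here by flattening it to its mean colour) and the change in the class score is recorded. The mean-colour fill and segment count are assumptions for illustration, and `model_predict` is again a hypothetical scoring function.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_importance(image, model_predict, target_class, n_segments=50):
    """Score drop caused by removing each SLIC superpixel in turn."""
    segments = slic(image, n_segments=n_segments, compactness=10.0)
    base = model_predict(image[None])[0, target_class]
    scores = {}
    for label in np.unique(segments):
        perturbed = image.copy()
        mask = segments == label
        perturbed[mask] = image[mask].mean(axis=0)  # flatten the superpixel
        scores[int(label)] = base - model_predict(perturbed[None])[0, target_class]
    return scores
```

Unlike the square grid above, superpixels follow image contours, which is exactly the contour information the first statement notes that square-area division ignores.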
“…The explanations are maps of the input images assigning different colours to the various types of labels. Similarly, a recursive division method proposed by [184] hides rectangular parts of the input images of varying size to analyse their influence on the predictions of the underlying neural network.…”
Section: Visual Explanations - Miscellaneous
confidence: 99%
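One way the recursive-division idea summarised in this statement could look in code is sketched below: a rectangle is hidden, and only if hiding it changes the prediction enough is it split and probed at a finer scale. The quadrant split, black fill, threshold, and stopping size are assumptions made for this sketch, not the authors' published algorithm, and `model_predict` is hypothetical.

```python
import numpy as np

def recursive_division(image, model_predict, target_class,
                       region=None, min_size=8, threshold=0.1, found=None):
    """Collect small rectangles whose occlusion changes the class score."""
    if region is None:
        region = (0, 0, image.shape[0], image.shape[1])  # y, x, height, width
    if found is None:
        found = []
    y, x, h, w = region
    base = model_predict(image[None])[0, target_class]
    occluded = image.copy()
    occluded[y:y + h, x:x + w] = 0.0  # hide the rectangle with black
    if base - model_predict(occluded[None])[0, target_class] < threshold:
        return found                  # region barely influences the prediction
    if h <= min_size or w <= min_size:
        found.append(region)          # influential and fine-grained: report it
        return found
    for dy in (0, h // 2):            # otherwise split into quadrants
        for dx in (0, w // 2):
            sub = (y + dy, x + dx,
                   h - h // 2 if dy else h // 2,
                   w - w // 2 if dx else w // 2)
            recursive_division(image, model_predict, target_class,
                               sub, min_size, threshold, found)
    return found
```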
“…[39][40][41][42][43] Without such model explainability, deep learning algorithms remain a "black box" in implementation. The massive data computations in deep neural networks are beyond human logical and symbolic abilities for causality [44], which raises technical issues of deep learning model development for medical imaging applications. These issues include, but are not limited to, the utilization of model input ("Do we need this as a part of the model?…”
Section: Introduction
confidence: 99%