2021
DOI: 10.48550/arxiv.2111.02398
Preprint

Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods

Abstract: Artificial Intelligence (AI) has emerged as a useful aid in numerous clinical applications for diagnosis and treatment decisions. Deep neural networks have shown the same or better performance than clinicians in many tasks, owing to the rapid increase in available data and computational power. To conform to the principles of trustworthy AI, it is essential that an AI system be transparent, robust, and fair, and that it ensure accountability. Current deep neural solutions are referred to as black boxes due to a lac…

Cited by 1 publication (2 citation statements)
References 129 publications (183 reference statements)
“…Most attribution methods [17], [28] have been proposed as post-hoc procedures to visualize which pixels contribute positively or negatively to the network decision [20], [21]. In a few works, they have also been used to regularize the training of a classification network.…”
Section: A. Interpretable Classification
confidence: 99%
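To make the quoted idea concrete, here is a minimal sketch of one such post-hoc attribution procedure: a plain gradient saliency map in PyTorch. The toy two-class classifier and the random 64×64 input are hypothetical stand-ins (nothing here reproduces the specific methods of [17], [28]); the rule shown, the gradient of the class score with respect to each pixel, is only the simplest member of this family.

```python
import torch
import torch.nn as nn

# Hypothetical toy two-class classifier; any trained model would do here.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

def saliency_map(model, image, target_class):
    """Post-hoc gradient attribution: d(class score)/d(pixel).

    Positive entries mark pixels that push the class score up,
    negative entries mark pixels that push it down.
    """
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad[0].detach()

x = torch.randn(1, 1, 64, 64)        # stand-in for a medical image
heatmap = saliency_map(model, x, target_class=1)
print(heatmap.shape)                 # torch.Size([1, 64, 64])
```

The same per-pixel gradient can also be folded into the training loss, e.g. by penalizing attribution mass that falls outside an annotated region, which is the regularization use the statement alludes to.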
“…From a trained network, these methods compute a heat map that indicates the positive or negative contribution of each voxel of the input image to the network decision. These methods are mostly used at inference to verify the interpretability of a trained network and to check that these maps match high-level knowledge, as in [20], [21]. For example, in a medical context, they can be used to check that the decision of the network matches anatomical abnormalities present in the image.…”
Section: Introduction
confidence: 99%
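Below is a minimal sketch of the inference-time check described here, assuming a hypothetical 3D toy classifier, a random input volume, and a synthetic lesion mask (none of which come from the cited works); gradient × input stands in for whichever attribution method produced the heat map.

```python
import torch
import torch.nn as nn

# Hypothetical 3D toy classifier standing in for a trained network.
model = nn.Sequential(
    nn.Conv3d(1, 4, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(4, 2),
)
model.eval()

def voxel_heatmap(model, volume, target_class):
    """Gradient x input attribution: per-voxel contribution to the class score."""
    volume = volume.clone().requires_grad_(True)
    model(volume)[0, target_class].backward()
    return (volume.grad * volume)[0, 0].detach()

vol = torch.randn(1, 1, 32, 32, 32)          # stand-in for a 3D scan
lesion = torch.zeros(32, 32, 32, dtype=torch.bool)
lesion[10:20, 10:20, 10:20] = True           # hypothetical abnormality mask

heat = voxel_heatmap(model, vol, target_class=1)
# Do the 1000 strongest-attribution voxels fall inside the abnormality?
thresh = heat.abs().flatten().topk(1000).values[-1]
top = heat.abs() >= thresh
overlap = (top & lesion).sum().item() / top.sum().item()
print(f"fraction of top-attribution voxels inside the lesion: {overlap:.2f}")
```

In practice the lesion mask would come from expert annotation, and a high overlap fraction is the kind of agreement with high-level anatomical knowledge that the quoted passage describes.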