2020
DOI: 10.1609/aaai.v34i04.6064
Sanity Checks for Saliency Metrics

Abstract: Saliency maps are a popular approach to creating post-hoc explanations of image classifier outputs. These methods produce estimates of the relevance of each pixel to the classification output score, which can be displayed as a saliency map that highlights important pixels. Despite a proliferation of such methods, little effort has been made to quantify how good these saliency maps are at capturing the true relevance of the pixels to the classifier output (i.e. their “fidelity”). We therefore investigate existi…

Cited by 90 publications (81 citation statements)
References 13 publications (27 reference statements)
“…In principle this could be used to improve the interactive feedback loop proposed for the ( P2 ) scenario. Another direction for future work is to complement the metrics of LEAF framework with a set of sanity checks based on measures of statistical reliability, as suggested in Tomsett et al (2020) for saliency metrics.…”
Section: Discussion (mentioning)
Confidence: 99%
“…OPEN ACCESS validate, 41 with some failing basic sanity checks. 42 This would preclude the use of neural network models for high-stakes decision support.…”
Section: Ll (mentioning)
Confidence: 99%
“…Since the proper interpretation method is to rank the features most sensitive to the model's decision, it seems natural to consider the Spearman rank correlation [58] to compare the similarity between explanations. Prior work has provided theoretical and experimental arguments in line with this choice [50,49,59,46].…”
Section: Efficiency (mentioning)
Confidence: 99%