2022
DOI: 10.1002/ail2.77

Evaluating perceptual and semantic interpretability of saliency methods: A case study of melanoma

Abstract: In order to be useful, XAI explanations have to be faithful to the AI system they seek to elucidate and also interpretable to the people who engage with them. There exist multiple algorithmic methods for assessing faithfulness, but this is not so for interpretability, which is typically assessed only through expensive user studies. Here we propose two complementary metrics to algorithmically evaluate the interpretability of saliency map explanations. One metric assesses perceptual interpretability by quantify…

Cited by 3 publications (2 citation statements)
References 38 publications
“…Comparable studies utilized activation maps or tiles with top scores to interpret the model inference. However, the trustworthiness of activation maps could be more questionable as deep neural network classifiers have an opportunistic nature, and the existing saliency methods inherit a high risk for misinterpretation, limited reproducibility, and sparse visualization [47,48]. Moreover, it should be considered that tiles with top scores ignore the variance in histology patterns between two categories after thresholding predictions, as evident by our data on the correlation between histology patterns and prediction deciles.…”
Section: Discussion (mentioning)
Confidence: 98%
“…Comparable studies utilized activation maps or tiles with top scores to interpret the model inference. However, the trustworthiness of activation maps could be more questionable as deep neural network classifiers have an opportunistic nature and the existing saliency methods inherit a high risk for misinterpretation, limited reproducibility, and sparse visualization 43,44. Moreover, considering tiles with top scores ignores the variance in histology patterns between two categories after thresholding predictions, as evident by our data on the correlation between histology patterns and prediction deciles.…”
Section: Discussion (mentioning)
Confidence: 99%
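
For readers unfamiliar with the "tiles with top scores" approach that both citation statements critique, the following is a minimal sketch of how such a ranking is typically computed for a tiled whole-slide image; the function and variable names are hypothetical and not taken from any of the cited papers. It also illustrates the concern raised above: a top-k ranking discards how scores are distributed across the remaining tiles once predictions are thresholded.

```python
import numpy as np

def top_scoring_tiles(tile_scores, tile_ids, k=8):
    """Return the k tiles the classifier scored highest for the class of interest.

    tile_scores: 1-D array of per-tile probabilities for the predicted class
    tile_ids:    parallel sequence of tile identifiers (e.g., slide coordinates)
    """
    order = np.argsort(tile_scores)[::-1]  # indices sorted by descending score
    return [(tile_ids[i], float(tile_scores[i])) for i in order[:k]]

# Hypothetical usage with scores from a whole-slide-image tile classifier
scores = np.array([0.12, 0.97, 0.55, 0.91, 0.33])
ids = ["(0,0)", "(0,1)", "(1,0)", "(1,1)", "(2,0)"]
print(top_scoring_tiles(scores, ids, k=2))  # [('(0,1)', 0.97), ('(1,1)', 0.91)]
```

Because only the ranked extremes are inspected, mid-range tiles, where histology patterns may vary between the two categories, never surface in the explanation, which is the limitation the citing authors point to.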