2022
DOI: 10.1007/s10044-021-01055-y

Explainable image classification with evidence counterfactual

Abstract: The complexity of state-of-the-art modeling techniques for image classification impedes the ability to explain model predictions in an interpretable way. A counterfactual explanation highlights the parts of an image which, when removed, would change the predicted class. Both legal scholars and data scientists are increasingly turning to counterfactual explanations as these provide a high degree of human interpretability, reveal what minimal information needs to be changed in order to come to a different prediction…
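The abstract's central idea, an evidence counterfactual that finds a small set of image parts whose removal flips the predicted class, can be illustrated with a greedy segment-removal search. The sketch below is a minimal illustration, not the paper's exact SEDC procedure: the scikit-learn-style `model.predict_proba` interface, SLIC over-segmentation, and mean-color replacement as the "removal" operator are all assumptions.

```python
# Minimal sketch of a search-based evidence counterfactual: greedily grey out
# image segments until the predicted class changes. All interfaces here are
# illustrative assumptions, not the paper's exact method.
import numpy as np
from skimage.segmentation import slic

def evidence_counterfactual(image, model, n_segments=50, max_removed=10):
    """Return the segments whose removal changes the predicted class, or None."""
    segments = slic(image, n_segments=n_segments)        # over-segment the image
    original_class = np.argmax(model.predict_proba(image[np.newaxis])[0])
    removed, working = [], image.copy().astype(float)

    for _ in range(max_removed):
        best_seg, best_score = None, np.inf
        for seg in np.unique(segments):
            if seg in removed:
                continue
            candidate = working.copy()
            candidate[segments == seg] = image.mean()    # "remove" this segment
            score = model.predict_proba(candidate[np.newaxis])[0][original_class]
            if score < best_score:                       # largest drop in class score
                best_seg, best_score = seg, score
        removed.append(best_seg)
        working[segments == best_seg] = image.mean()
        if np.argmax(model.predict_proba(working[np.newaxis])[0]) != original_class:
            return removed, working                      # class flipped: counterfactual found
    return None                                          # no counterfactual within budget
```

The greedy best-first loop keeps removing whichever remaining segment most reduces the original class score, which is one simple way to realize the "minimal information to change" framing in the abstract.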

Cited by 46 publications (19 citation statements) · References 35 publications

“…Explainable AI has been an important topic in recommender systems [5,6,13,36,41,46,47], natural language processing [8,16,20] and computer vision [7,10,15,25,38]. To improve the transparency of deep neural networks, many explanation techniques have been proposed in recent years.…”
Section: Related Work, 2.1 Explainability in Deep Learning and AI
confidence: 99%
“…However, this work was based solely on textual data, in contrast to our approach which focuses on image data classification. Vermeire et al. (2020) [31] extended the idea of SEDC to the visual domain to generate counterfactual explanations for image classification tasks. Recent work in this area has focused on generating realistic counterfactual explanations using Generative Adversarial Nets (GANs) [32], [33].…”
Section: Related Work
confidence: 99%
“…To find out which document features were the most important to the models, we used an approach similar to [31] to generate counterfactual explanations. However, instead of a search-based perturbation, we used the feature importance map generated by DeepSHAP [15] as the basis for the feature perturbation to generate the counterfactual explanations.…”
Section: E. Counterfactual Explanations Using Feature Importance
confidence: 99%
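The citation above replaces the search over perturbations with a DeepSHAP importance ranking. A hedged sketch of that idea follows; `model` (a deep net accepted by `shap.DeepExplainer`), `predict_fn` (returning class probabilities as a NumPy array), the background batch, and the zero-valued "removal" baseline are all illustrative assumptions rather than the cited paper's exact setup.

```python
# Hedged sketch of importance-guided counterfactual generation: rank features
# once by their DeepSHAP attribution, then remove the strongest evidence for
# the predicted class until the prediction flips.
import numpy as np
import shap

def shap_guided_counterfactual(x, model, predict_fn, background, max_steps=500):
    original_class = int(np.argmax(predict_fn(x[np.newaxis])[0]))

    # DeepSHAP attributions for the predicted class. Older shap versions
    # return one array per class; this sketch assumes that list form.
    explainer = shap.DeepExplainer(model, background)
    attributions = explainer.shap_values(x[np.newaxis])[original_class][0]

    # Remove features in order of decreasing support for the predicted class.
    order = np.argsort(attributions.ravel())[::-1]
    perturbed = x.astype(float).ravel()
    for idx in order[:max_steps]:
        perturbed[idx] = 0.0                              # zero-baseline "removal"
        candidate = perturbed.reshape(x.shape)
        if int(np.argmax(predict_fn(candidate[np.newaxis])[0])) != original_class:
            return candidate                              # predicted class flipped
    return None                                           # no flip within budget
```

Ranking once and perturbing in attribution order trades the exhaustive per-step search of the SEDC-style approach for a single explainer call, which is the efficiency argument the citing paper makes.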