2023
DOI: 10.1007/978-3-031-26293-7_14
Diffusion Models for Counterfactual Explanations

Abstract: This paper addresses the challenge of generating Counterfactual Explanations (CEs), which involves identifying and modifying the fewest features necessary to alter a classifier's prediction for a given image. Our proposed method, Text-to-Image Models for Counterfactual Explanations (TIME), is a black-box counterfactual technique based on distillation. Unlike previous methods, this approach requires only the image and its prediction, omitting the need for the classifier's structure, parameters, or gradients.
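To make the black-box setting concrete, the sketch below shows a generic counterfactual search in Python. It is not the TIME method from the paper (which distills text-to-image models); it only illustrates the constraint the abstract describes: the search touches nothing but the input and the classifier's prediction, with no access to gradients or parameters. The `classify` function and the feature values are illustrative placeholders.

```python
# Hedged sketch: a generic black-box counterfactual search, NOT the paper's
# TIME method. It perturbs as few features as possible until the toy
# classifier's prediction flips, using only inputs and predictions.
import random


def classify(x):
    # Toy stand-in for an opaque classifier: predicts 1 when the feature
    # sum exceeds a threshold, else 0. Any black-box predictor fits here.
    return int(sum(x) > 2.0)


def counterfactual(x, budget=100, step=1.0, seed=0):
    """Greedily search for a small edit that flips classify(x).

    Only the input and its prediction are consulted -- no gradients or
    model parameters, mirroring the black-box setting described above.
    Returns (counterfactual, indices of changed features), or
    (None, indices tried) if the budget is exhausted.
    """
    rng = random.Random(seed)
    original = classify(x)
    cf = list(x)
    changed = set()
    for _ in range(budget):
        i = rng.randrange(len(cf))
        cf[i] += step  # perturb one feature at a time
        changed.add(i)
        if classify(cf) != original:
            return cf, sorted(changed)
    return None, sorted(changed)


cf, edited = counterfactual([0.5, 0.5, 0.5])
```

A real CE method would additionally keep the counterfactual on the data manifold (the role generative models play in this literature); this sketch only captures the "fewest necessary changes, prediction access only" contract.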

Cited by 4 publications (1 citation statement). References 61 publications (63 reference statements).
“…To overcome the above shortcomings, deep generative models, such as Generative Adversarial Networks (GANs) [11], Variational Auto Encoders (VAEs) [12] and Diffusion Models [13] have emerged in the literature as promising solutions. Generative models can learn to map the distribution of (high-dimensional) input images between different domains.…”
Section: Introduction
Confidence: 99%