2019
DOI: 10.48550/arxiv.1907.09294
Preprint

The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations

Cited by 21 publications (29 citation statements). References: 0 publications.

“…The advantages of our algorithm are twofold: (1) it can guarantee sample acceptance in high-dimensional space, where rejection sampling based on the Monte Carlo method easily fails when the region area is unknown; (2) it can handle the sampling strategy from the perspective of the model, where the commonly used -based sampling (Erhan, Courville, and Bengio 2010) is not precise enough to obtain samples under complex non-spherical generative boundaries (Laugel et al 2019). We experimentally verify that our algorithm obtains more consistent samples compared to -based sampling methods on deep convolutional GANs (DCGAN) (Radford, Metz, and Chintala 2015) and progressive growing of GANs (PGGAN) (Karras, Aila, and Laine 2015).…”
Section: Introduction (mentioning)
confidence: 99%
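
The statement above claims that Monte Carlo rejection sampling "easily fails" in high-dimensional space when the region area is unknown. A minimal sketch of that failure mode, under an illustrative assumption (uniform proposals in a cube, acceptance inside the unit ball) that is not taken from the cited works:

```python
# Minimal sketch (illustrative assumption, not the cited algorithm):
# rejection sampling with uniform proposals in [-1, 1]^d, accepting
# points inside the unit ball. The acceptance region's volume fraction
# shrinks toward zero as d grows, so almost every proposal is rejected.
import numpy as np

def rejection_acceptance_rate(dim: int, n_trials: int = 100_000) -> float:
    """Fraction of uniform cube proposals that land inside the unit ball."""
    rng = np.random.default_rng(0)
    proposals = rng.uniform(-1.0, 1.0, size=(n_trials, dim))
    accepted = np.linalg.norm(proposals, axis=1) <= 1.0
    return float(accepted.mean())

for d in (2, 5, 10, 20):
    print(f"dim={d:2d}  acceptance rate ~ {rejection_acceptance_rate(d):.6f}")
# The rate drops from ~0.79 at d=2 to effectively 0 by d=20.
```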
“…A similar desideratum is considered in [48] and [52], with the latter work employing neural autoencoders to that end. Laugel et al [35, 36] require that z can always be reached from a training point x′ without having to cross the decision boundary of 𝑓, so that z is not the result of an artifact in the decision boundary of 𝑓. In [26] and [41], counterfactual explanations are studied through the lens of causality.…”
Section: Related Work (mentioning)
confidence: 99%
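
As a rough illustration of the reachability requirement attributed to Laugel et al [35, 36] above, a simplified check is sketched below. The straight-line path and fixed step count are assumptions of this sketch, not the authors' exact connectedness-based procedure; `f` stands for any classifier mapping a batch of points to labels.

```python
# Simplified sketch of the requirement attributed to Laugel et al [35, 36]:
# a counterfactual z should be reachable from some training point x' of the
# same predicted class without the classifier f changing its decision.
# The straight-line path and step count are assumptions of this sketch.
import numpy as np

def reaches_without_crossing(f, x_prime, z, n_steps: int = 100) -> bool:
    """True if f's prediction is constant along the segment x' -> z."""
    ts = np.linspace(0.0, 1.0, n_steps)[:, None]
    path = (1.0 - ts) * x_prime + ts * z      # shape (n_steps, n_features)
    preds = f(path)
    return bool(np.all(preds == preds[0]))

def is_justified(f, z, X_train, n_steps: int = 100) -> bool:
    """z counts as justified if some same-class training point reaches it."""
    z_label = f(z[None, :])[0]
    same_class = X_train[f(X_train) == z_label]
    return any(reaches_without_crossing(f, x, z, n_steps) for x in same_class)
```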
“…Lastly, Recidivism (Rec) is a data set collected in a ProPublica investigation of possible racial bias in the commercial software COMPAS, which is intended to estimate the risk that an inmate will re-offend [33]. Examples of recent works in fair and explainable machine learning that each adopt some of these data sets are [8,9,17,28,32,35,54].…”
Section: Data Sets (mentioning)
confidence: 99%
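
For readers who want to inspect the recidivism data mentioned above, a hedged loading example follows. The URL and column names refer to ProPublica's public compas-analysis repository as commonly used in fairness research; both are assumptions here and may change.

```python
# Hedged example: load ProPublica's COMPAS recidivism data [33].
# The URL and column names refer to ProPublica's public compas-analysis
# repository; both may change and should be verified before use.
import pandas as pd

URL = ("https://raw.githubusercontent.com/propublica/"
       "compas-analysis/master/compas-scores-two-years.csv")

df = pd.read_csv(URL)
# Columns commonly used in fairness studies of this data set:
print(df[["race", "sex", "age", "decile_score", "two_year_recid"]].head())
```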
“…Justified Counterfactual Explanations In 2019, Laugel et al [27] discussed, for classification problems, the risk of basing counterfactual explanations on artifacts learned by the model instead of actual knowledge. This problem arises for instances that lie in a subspace where there is no reliable information.…”
Section: B Contrastive Approach (mentioning)
confidence: 99%
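
The passage locates the risk in subspaces with no reliable information. A crude, hypothetical proxy for detecting such cases (not the method of [27]) is to flag counterfactuals that fall far outside the support of the training data:

```python
# Crude, hypothetical proxy (not the method of [27]): flag a counterfactual
# z as potentially unjustified when it is farther from the training data
# than most training points are from their own nearest neighbour. The
# quantile threshold is an assumption of this sketch.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def flag_out_of_support(z, X_train, quantile: float = 0.95) -> bool:
    nn = NearestNeighbors(n_neighbors=2).fit(X_train)
    train_dists, _ = nn.kneighbors(X_train)   # column 0 is the point itself
    threshold = np.quantile(train_dists[:, 1], quantile)
    dist_z, _ = nn.kneighbors(z[None, :], n_neighbors=1)
    return bool(dist_z[0, 0] > threshold)
```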