2021 · Preprint
DOI: 10.48550/arxiv.2103.13701

ECINN: Efficient Counterfactuals from Invertible Neural Networks

Frederik Hvilshøj,
Alexandros Iosifidis,
Ira Assent

Abstract: Counterfactual examples identify how inputs can be altered to change the predicted class of a classifier, thus opening up the black-box nature of, e.g., deep neural networks. We propose a method, ECINN, that utilizes the generative capacities of invertible neural networks for image classification to generate counterfactual examples efficiently. In contrast to competing methods that sometimes need a thousand evaluations or more of the classifier, ECINN has a closed-form expression and generates a counterfactual…
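The core idea in the abstract — exploiting an invertible network so a counterfactual comes from a single closed-form latent-space edit rather than repeated classifier queries — can be illustrated with a toy sketch. Everything here is hypothetical: the invertible linear map `W` stands in for a real normalizing flow, and the class-conditional latent means `mu_source`/`mu_target` are made-up values, not ECINN's learned quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy invertible "network": an invertible linear map z = W @ x.
# (Hypothetical stand-in for a trained invertible neural network.)
W = rng.normal(size=(2, 2)) + 2.0 * np.eye(2)
W_inv = np.linalg.inv(W)

def forward(x):
    """Map an input x to its latent code z."""
    return W @ x

def inverse(z):
    """Map a latent code z back to input space."""
    return W_inv @ z

# Assumed class-conditional latent means (illustrative values only).
mu_source = np.array([-1.0, 0.0])
mu_target = np.array([1.0, 0.0])

def counterfactual(x, alpha=1.0):
    """Closed-form counterfactual: shift the latent code along the
    difference of class means, then invert back to input space.
    No iterative optimization or repeated classifier queries."""
    z = forward(x)
    z_cf = z + alpha * (mu_target - mu_source)
    return inverse(z_cf)

x = np.array([0.5, -0.3])
x_cf = counterfactual(x)
```

Because the map is invertible, the counterfactual's latent code differs from the original's by exactly `alpha * (mu_target - mu_source)` — one forward pass, one shift, one inverse pass.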

Cited by 4 publications (3 citation statements)
References 17 publications
“…For counterfactual observation generation, numerous methods have been proposed [176], [177], [178]. While these generally need to query an underlying model multiple times, efficient methods utilizing invertible neural networks have also been proposed [179]. A related problem concerns the quantitative evaluation of counterfactual examples; see the work by Hvilshøj et al [180] for an in-depth discussion.…”
Section: Interpretable and Fair Machine Learning
confidence: 99%
“…Interactive explainability. It could be fruitful to base explainable verification methods on counterfactual explanation techniques for AI [67]. Counterfactuals provide an understanding into AI models by identifying similar inputs with changes in decisive properties that lead to a different model outcome than the one under study.…”
Section: Documentation Of Verification
confidence: 99%
“…For counterfactual observation generation, numerous methods have been proposed [1,19,38]. While these generally need to query an underlying model multiple times, efficient methods utilizing invertible neural networks have also been proposed [52]. A related problem concerns the quantitative evaluation of counterfactual examples; see the work by Hvilshøj et al [53] for an in-depth discussion.…”
Section: Interpretable and Fair Machine Learning
confidence: 99%