2019
DOI: 10.48550/arxiv.1906.09293
Preprint

Generating Counterfactual and Contrastive Explanations using SHAP

Abstract: With the advent of GDPR, the domain of explainable AI and model interpretability has gained added impetus. Methods to extract and communicate visibility into decision-making models have become a legal requirement. Two specific types of explanations, contrastive and counterfactual, have been identified as suitable for human understanding. In this paper, we propose a model-agnostic method and its systemic implementation to generate these explanations using Shapley additive explanations (SHAP). We discuss a generati…
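
The abstract's core idea, using SHAP's additive attributions to answer "why this outcome rather than another?", can be sketched with the open-source shap library. The snippet below is a minimal illustration under assumptions made here, not the authors' pipeline: the breast-cancer dataset, the logistic-regression model, and the top-|SHAP| heuristic for selecting contrastive features are all illustrative choices.

import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

# Train any opaque classifier; the explainer below never inspects its internals.
data = load_breast_cancer()
X, y = data.data, data.target
model = LogisticRegression(max_iter=5000).fit(X, y)

# KernelExplainer is SHAP's model-agnostic explainer: it needs only a
# prediction function and a background sample of the data.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)
phi = explainer.shap_values(X[:1])[0]  # Shapley values for one instance

# Contrastive reading (an assumption of this sketch): features with the largest
# absolute SHAP values are candidate answers to "why this outcome rather than
# the alternative?", and their signs indicate the direction a counterfactual
# change would need to take.
for i in np.argsort(-np.abs(phi))[:3]:
    print(f"{data.feature_names[i]}: SHAP = {phi[i]:+.4f}")

Because KernelExplainer only requires a prediction function, the same pattern applies to any model, which matches the abstract's model-agnostic framing; a counterfactual candidate could then perturb the highlighted features against the sign of their attributions.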

Cited by 17 publications (24 citation statements). References 2 publications.

“…For instance, in an autonomous car it is not practical to go through all alternative angles a steering wheel could have been turned to observe alternative results. Deriving the foil from the context is part of the explanation facility's task but, apart from some attempts in IML [235,208,185,181], has not been widely discussed in the context of RL. While not the focus of their research, some work in dynamic programming, such as that of Erwig et al. [74], has found that the context for contrastive explanations could be anticipated by identifying principal and minor categories and using these to anticipate user questions through value decomposition.…”
Section: Results of XRL-Behaviour
confidence: 99%
“…Therefore, datasets of various data types (categorical, numerical, and mixed variables), application domains, numbers of features, ratios of feature types, class balances, and kinds of data constraints are selected (see Table 2). Several of these datasets are widely used in counterfactual explanation studies, such as Adult [53,29,64,74,21,76,10,45], Statlog – German Credit [53,21,59,52], Breast Cancer Wisconsin (BCW) [74,5,76,4,39] and Wine [60,42,4]. Note that the number of datasets included in the papers where these algorithms were proposed ranges from 1 to only 4 [53], which further motivates the need for a large-scale benchmarking study.…”
Section: Datasets
confidence: 99%
“…They are attractive since they ignore the underlying structure of the target model, allowing a broader spectrum of applications. In some cases, disclosing the system's inner mechanism can make it vulnerable to attacks or gamification [46]. Gamification of a system may be useful in certain circumstances [47], but counterproductive in others [48].…”
Section: What and How to Explain?
confidence: 99%