2022
DOI: 10.25300/misq/2022/16749

Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach

Abstract: We examine counterfactual explanations for explaining the decisions made by model-based AI systems. The counterfactual approach we consider defines an explanation as a set of the system’s data inputs that causally drives the decision (i.e., changing the inputs in the set changes the decision) and is irreducible (i.e., changing any subset of the inputs does not change the decision). We (1) demonstrate how this framework may be used to provide explanations for decisions made by general data-driven AI systems tha…
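The abstract's definition lends itself to a small illustration. Below is a minimal Python sketch, not taken from the paper, of an exhaustive search for such explanations: the function name find_counterfactual_explanations, the predict callable, the reference ("changed") values, and the toy loan rule are all hypothetical choices made for illustration, and the paper does not prescribe this implementation.

```python
from itertools import combinations

def find_counterfactual_explanations(predict, instance, reference, max_size=3):
    """Exhaustively search for irreducible sets of inputs whose change flips the decision.

    predict   -- callable mapping a dict of input values to a decision label
    instance  -- dict with the instance's actual input values
    reference -- dict with the substituted ("changed") values, a modeling choice
    max_size  -- largest explanation set to consider
    """
    original = predict(instance)
    features = list(instance)

    def flips(subset):
        # Substitute reference values for the inputs in `subset`, keep the rest unchanged
        changed = {f: (reference[f] if f in subset else v)
                   for f, v in instance.items()}
        return predict(changed) != original

    explanations = []
    for size in range(1, max_size + 1):
        for subset in combinations(features, size):
            if not flips(subset):
                continue  # changing these inputs does not change the decision
            # Irreducible: no proper subset of `subset` already flips the decision
            proper_subsets = (c for s in range(1, size)
                              for c in combinations(subset, s))
            if all(not flips(c) for c in proper_subsets):
                explanations.append(set(subset))
    return explanations


# Hypothetical loan decision: deny when income is low or there is a recent default.
decide = lambda x: "deny" if x["recent_default"] or x["income"] < 50_000 else "approve"
applicant = {"income": 30_000, "recent_default": True, "age": 40}
reference = {"income": 80_000, "recent_default": False, "age": 40}

print(find_counterfactual_explanations(decide, applicant, reference))
# [{'income', 'recent_default'}] -- neither input alone flips the denial,
# so the irreducible explanation contains both.
```

In the toy example, the decision is explained by the pair of inputs rather than by either one alone, which is exactly the irreducibility requirement in the abstract's definition.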

Cited by 9 publications (2 citation statements). References: 0 publications.

Citation statements:
“…As described in Section 1, SHAP values represent a feature-based XAI method that is both among the most widespread in practice (Bhatt et al., 2020) and presumably necessary to comply with upcoming regulation (Goodman and Flaxman, 2017). While SHAP values seem a natural choice for our study, we acknowledge that there exist other relevant forms of explanations for AI systems, such as example-based explanations (Mittelstadt et al., 2019) or counterfactual explanations (Fernández-Loría et al., 2022). While it is not within the scope of this paper to investigate and compare the relationship between various forms of explanations and the delegation of authority, future research should examine whether and why the effects we observed would differ if users were provided with these forms.…”
Section: Future Research Direction
Citation type: mentioning (confidence: 99%)
“…Here, it must be determined how the feedback could be presented so that recipients understand that the feedback does not have to represent the truth. IS research has recently explored the consequences of making underlying decisions and outcomes of artificial intelligence understandable for users (e.g., Fernández-Loría et al., 2022; Storey et al., 2022), and this might therefore be an interesting research topic. Another promising research avenue would be to study machine learning approaches that could make the initialization phase of such feedback systems obsolete.…”
Section: Limitations and Future Research
Citation type: mentioning (confidence: 99%)