2020
DOI: 10.1007/s11634-020-00418-3

A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C

Abstract: Predictive systems based on high-dimensional behavioral and textual data have serious comprehensibility and transparency issues: linear models require investigating thousands of coefficients, while the opaqueness of nonlinear models makes things worse. Counterfactual explanations are becoming increasingly popular for generating insight into model predictions. This study aligns the recently proposed linear interpretable model-agnostic explainer and Shapley additive explanations with the notion of counterfactual…
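The SEDC algorithm compared in the paper searches for an instance-level counterfactual by removing evidence features until the predicted class flips. A minimal sketch of that greedy idea, using a hypothetical toy linear scorer over binary behavioral features (all names and weights are illustrative, not the authors' implementation):

```python
# Hypothetical SEDC-style counterfactual search: greedily remove the
# active feature whose removal lowers the positive-class score most,
# until the prediction flips to the negative class.

def score(features, weights):
    """Toy linear scorer over a set of active binary features."""
    return sum(weights[f] for f in features)

def sedc_counterfactual(features, weights, threshold=0.0):
    """Return a small set of features whose removal flips the
    prediction from positive (score > threshold) to negative,
    or None if no removal can flip it."""
    active = set(features)
    removed = []
    while score(active, weights) > threshold:
        # The active feature with the largest positive weight reduces
        # the score the most when removed.
        best = max(active, key=lambda f: weights[f])
        if weights[best] <= 0:
            return None  # removing features can no longer lower the score
        active.remove(best)
        removed.append(best)
    return removed

# Toy behavioral data: pages a user "liked", with illustrative weights.
weights = {"page_a": 1.5, "page_b": 0.7, "page_c": -0.2, "page_d": 0.4}
explanation = sedc_counterfactual({"page_a", "page_b", "page_c", "page_d"}, weights)
print(explanation)  # → ['page_a', 'page_b', 'page_d']
```

The returned set reads as a counterfactual explanation: "had the user not liked these pages, the model would not have predicted the positive class." LIME-C and SHAP-C reach the same kind of explanation by ranking features with LIME or SHAP importance scores instead of raw model weights.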

Cited by 62 publications (56 citation statements)
References 33 publications (66 reference statements)
“…In medical applications, the explainability of predictions is paramount [ 49 ], which is the reason for the trending research topic of explainable artificial intelligence (XAI). Two of the most popular methods for explaining the predictions of traditional black-box models are SHAP (Shapley Additive exPlanation) and LIME (Local Interpretable Model-agnostic Explanations), which also have open-sourced libraries that are integrated into other machine learning toolkits [ 50 ].…”
Section: Methods
confidence: 99%
“…2014; Ramon et al 2020), or network data (Óskarsdóttir et al 2020), which are not captured appropriately by standard implementations of these models. This calls for new mathematical optimization formulations and/or numerical solution approaches to address these complexities adequately.…”
Section: Challenges For The Future
confidence: 99%
“…Counterfactuals provide intuitive and human-friendly explanations. Therefore, a lot of corresponding methods have been proposed [36]- [39], including counterfactual modifications of LIME [40], [41].…”
Section: Related Work
confidence: 99%