Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3351095.3372850

Explaining machine learning classifiers through diverse counterfactual explanations

Abstract: Post-hoc explanations of machine learning models are crucial for people to understand and act on algorithmic predictions. An intriguing class of explanations is through counterfactuals, hypothetical examples that show people how to obtain a different prediction. We posit that effective counterfactual explanations should satisfy two properties: feasibility of the counterfactual actions given user context and constraints, and diversity among the counterfactuals presented. To this end, we propose a framework for …
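The two properties named in the abstract, feasibility and diversity, can be illustrated with a small sketch. The code below is not the paper's DiCE method: it is a toy heuristic for a hand-written linear classifier, in which each counterfactual flips the prediction by editing a single, different feature (a crude proxy for diversity), preferring high-weight features so the edit stays small (a crude proxy for feasibility). All names and the heuristic itself are illustrative assumptions.

```python
# Toy sketch only: NOT the paper's DiCE algorithm. Diversity is approximated
# by editing a different feature per counterfactual; feasibility by minimizing
# the size of that edit.

def predict(w, b, x):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def counterfactuals(w, b, x, k=2, margin=1e-6):
    """Return up to k counterfactuals for x, each flipping the prediction
    by changing one (different) feature."""
    orig = predict(w, b, x)
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    cfs = []
    # Try high-|weight| features first: they need the smallest change.
    for i in sorted(range(len(w)), key=lambda j: -abs(w[j])):
        if w[i] == 0:
            continue
        # Move feature i just past the decision boundary.
        target = margin if orig == 0 else -margin
        cf = list(x)
        cf[i] += (target - score) / w[i]
        if predict(w, b, cf) != orig:
            cfs.append(cf)
        if len(cfs) == k:
            break
    return cfs

w, b = [2.0, -1.0, 0.5], -1.0
x = [0.2, 0.5, 0.1]                 # score = -1.05, predicted class 0
cfs = counterfactuals(w, b, x, k=2) # two counterfactuals, each class 1
```

Each returned counterfactual differs from `x` in exactly one coordinate, so the set is trivially diverse; the paper's actual framework instead optimizes an explicit diversity term over the whole set.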


Cited by 632 publications (518 citation statements)
References 28 publications
“…For the first one, we compared MOC (once with and once without our proposed strategies for initialization and mutation) with 'DiCE' by Mothilal et al. [24], 'Recourse' by Ustun et al. [33], and 'Tweaking' by Tolomei et al. [32]. We chose DiCE, Recourse, and Tweaking because they are implemented in general open-source code libraries.…”
Section: Methods (mentioning)
confidence: 99%
“…A model-specific approach was proposed by Wachter et al. [35], who also introduced and formalized the concept of counterfactuals in predictive modeling. Like many model-specific methods [15,20,24,28,33], their approach is limited to differentiable models. The approach of Tolomei et al. [32] generates explanations for tree-based ensemble binary classifiers.…”
Section: Related Work (mentioning)
confidence: 99%
“…In applications to pathologic image analysis, an attention mechanism was used to visualize epithelial cell areas [57]. Arbitrarily generated counterfactual examples can be used to describe how input changes affect the model output [58]. Case-based reasoning [59,60] can be combined with the interpretation target model to demonstrate consistency between the model output and the reasoning output for the same input [61].…”
Section: Limitations Of Current Computer-aided Pathology (mentioning)
confidence: 99%
“…Comparing the data values to the density distributions depicted by the shaded area can help identify anomalies and derive hypotheses on why and how the model produces a given prediction. It is also worth noting that the visualization interface can accommodate any other counterfactual generation methods [9,16,21,22].…”
Section: Visual Interface (mentioning)
confidence: 99%