2017
DOI: 10.48550/arxiv.1707.01154
Preprint

Interpretable & Explorable Approximations of Black Box Models

Abstract: We propose Black Box Explanations through Transparent Approximations (BETA), a novel model-agnostic framework for explaining the behavior of any black-box classifier by simultaneously optimizing for fidelity to the original model and interpretability of the explanation. To this end, we develop a novel objective function which allows us to learn (with optimality guarantees) a small number of compact decision sets, each of which explains the behavior of the black box model in unambiguous, well-defined regions of feature space…
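To make the idea of a decision-set surrogate concrete, below is a minimal, illustrative Python sketch. It is not the BETA algorithm: the rules are hand-written rather than learned under BETA's objective, and the dataset (iris), black-box model (a random forest), and thresholds are assumptions chosen only to show how the fidelity of an if-then surrogate to a black-box classifier can be measured.

```python
# Illustrative sketch only: a toy "decision set" surrogate evaluated for
# fidelity against a black-box classifier. This is NOT the BETA algorithm;
# rule learning and the optimality-guaranteed objective are out of scope here.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)  # assumed black box

# A hand-written decision set: each rule is (condition over features, predicted class).
# Feature indices: 2 = petal length, 3 = petal width (iris convention).
rules = [
    (lambda x: x[2] < 2.5,                 0),  # short petals  -> setosa
    (lambda x: x[2] >= 2.5 and x[3] < 1.8, 1),  # medium petals -> versicolor
    (lambda x: x[3] >= 1.8,                2),  # wide petals   -> virginica
]

def decision_set_predict(x, default=1):
    """Return the class of the first rule whose condition covers x."""
    for condition, label in rules:
        if condition(x):
            return label
    return default  # fallback when no rule fires

# Fidelity: how often the interpretable surrogate agrees with the black box.
bb_pred = black_box.predict(X)
ds_pred = np.array([decision_set_predict(x) for x in X])
print("fidelity to black box:", np.mean(bb_pred == ds_pred))
```

In BETA itself, the rules and their covering regions are learned by optimizing a joint objective over fidelity terms like the one printed above and interpretability terms (number, width, and overlap of rules), rather than being written by hand.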

Cited by 54 publications (76 citation statements)
References 6 publications
“…HCI research, and more broadly human-subject studies, are key to evaluating XAI in the context of use [30], identifying where it falls short, and informing human-centered solutions. While many studies showed positive results that XAI techniques can improve people's understanding of models [46,59,68,87], in this section we draw attention to a few pitfalls of XAI based on recent HCI research. 1 The difference between Why Not and How to Be That can be subtle and context-dependent.…”
Section: Pitfalls of XAI: Minding the Gaps Between Algorithmic Explanations… (mentioning, confidence: 99%)
“…While in recent years research on rule-based models has grown substantially, the problem of how to visualize the rules has gained comparatively little attention. Decision rules are mostly presented as plain textual if-then statements, independently of whether the rules are used as model explanations [10], [28], [29], [30] or are used directly as interpretable predictive models [25], [26], [31]. For rules extracted from a decision tree [2], tree structures have been studied widely in the visualization research community (a gallery of tree visualization techniques can be found at treevis.net [32]), but with little focus on structures designed to explore model behavior.…”
Section: Visual Representations of Rules and Decision Trees (mentioning, confidence: 99%)
“…Local-level feature explanations refers to the interpretations used to justify why the model made a specific decision for a single instance. Many existing approaches (Lundberg and Lee, 2017; Lakkaraju et al., 2017) can be adopted for local interpretations. Among the many existing explanation-generating methods, we adopted LIME (Ribeiro et al., 2016) as an example way of achieving local-level feature importance in the proposed framework.…”
Section: Local-level Explainable Feature Contributions (mentioning, confidence: 99%)
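As a concrete illustration of that last point, here is a minimal sketch of obtaining a local explanation with the `lime` package. The dataset and the scikit-learn classifier standing in for the black box are assumptions for illustration only, and the exact API may differ slightly across lime versions.

```python
# Hedged sketch: local feature attributions for one instance using the `lime`
# package (pip install lime). The iris data and random forest are placeholders
# for whatever black-box model is being explained.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain why the model made its prediction for a single instance.
explanation = explainer.explain_instance(
    data.data[0],             # the instance to explain
    model.predict_proba,      # black-box probability function
    num_features=4,
)
print(explanation.as_list())  # [(feature condition, local weight), ...]
```

Each returned pair is a human-readable feature condition and its local weight, i.e. how much that condition pushed the black-box prediction for this one instance.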