2019
DOI: 10.48550/arxiv.1906.01297
Preprint

Concept Tree: High-Level Representation of Variables for More Interpretable Surrogate Decision Trees

Abstract: Interpretable surrogates of black-box predictors trained on high-dimensional tabular datasets can struggle to generate comprehensible explanations in the presence of correlated variables. We propose a model-agnostic interpretable surrogate that provides global and local explanations of black-box classifiers to address this issue. We introduce the idea of concepts as intuitive groupings of variables that are either defined by a domain expert or automatically discovered using correlation coefficients. Concepts a…
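The abstract mentions discovering concepts automatically from correlation coefficients. Since the paper's procedure is not reproduced here, the following is only a minimal sketch of that general idea, assuming hierarchical clustering on a correlation-based distance; the function name, the 0.7 threshold, and the toy data are illustrative assumptions, not values from the paper.

# Minimal sketch (not the paper's exact procedure): group correlated variables
# into "concepts" via hierarchical clustering on a correlation-based distance.
# The 0.7 threshold and the example data are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def discover_concepts(X, feature_names, corr_threshold=0.7):
    """Group features whose absolute pairwise correlation exceeds the threshold."""
    corr = np.corrcoef(X, rowvar=False)           # feature-by-feature correlation matrix
    dist = 1.0 - np.abs(corr)                     # strongly correlated features -> small distance
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)    # condensed distance vector for linkage()
    Z = linkage(condensed, method="average")
    labels = fcluster(Z, t=1.0 - corr_threshold, criterion="distance")
    concepts = {}
    for name, label in zip(feature_names, labels):
        concepts.setdefault(label, []).append(name)
    return list(concepts.values())                # each sublist is one candidate concept

# Example: two correlated blocks of variables are recovered as two concepts.
rng = np.random.default_rng(0)
base_a, base_b = rng.normal(size=(2, 500))
X = np.column_stack([base_a, base_a + 0.1 * rng.normal(size=500),
                     base_b, base_b + 0.1 * rng.normal(size=500)])
print(discover_concepts(X, ["a1", "a2", "b1", "b2"]))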

Cited by 2 publications (4 citation statements)
References 11 publications (17 reference statements)
“…C4.5 uses interval-based splitting points, and generates shallower but wider trees compared to TREPAN. Concept Tree (Renard et al. 2019). Concept Tree is a recent extension of TREPAN by Renard et al. (2019) that adds automatic grouping of correlated features into the candidate concepts used for the tree nodes.…”
Section: Object of Explanation (mentioning)
confidence: 99%
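The statement above describes Concept Tree as adding groups of correlated features as candidate concepts for the tree nodes. Below is a minimal sketch of that concept-level surrogate idea only, not TREPAN or the authors' actual algorithm: each assumed concept group is collapsed to a single score (its first principal component) and a shallow scikit-learn tree is fitted to mimic the black box. Here black_box, X, and concept_groups are assumed inputs, not objects defined by the paper.

# Minimal sketch, not TREPAN/Concept Tree itself: summarise each group of
# correlated features with its first principal component and fit a shallow
# surrogate decision tree on those concept scores against the black box's
# predictions. `black_box`, `X`, and `concept_groups` are assumed inputs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def concept_surrogate(black_box, X, concept_groups, max_depth=3):
    """concept_groups: list of lists of column indices, one list per concept."""
    # One score per concept: project the group's columns onto their first PC.
    scores = np.column_stack([
        PCA(n_components=1).fit_transform(X[:, group]).ravel()
        for group in concept_groups
    ])
    y_black_box = black_box.predict(X)          # labels the surrogate must mimic
    surrogate = DecisionTreeClassifier(max_depth=max_depth).fit(scores, y_black_box)
    fidelity = surrogate.score(scores, y_black_box)
    return surrogate, fidelity                  # each tree node now tests a concept score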
“…Indeed, CA-based explanation methods construct the explanation based on human-defined concepts rather than representing the inputs based on features and internal model (activation) states. Hence, this idea of high-level features might be more familiar to humans, who may therefore be more likely to accept it (Hartmann et al. 2022; Renard et al. 2019). Formally, given a set of images belonging to a concept [x^(1), x^(2), …”
Section: Concept Attribution (mentioning)
confidence: 99%
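The truncated statement above introduces concept attribution from a set of images x^(1), x^(2), … belonging to a concept. As an illustration only, in the spirit of concept-activation-vector methods rather than the citing paper's exact formulation, the sketch below derives a concept direction by separating concept activations from random activations with a linear classifier; acts_concept and acts_random are assumed to be precomputed layer activations, not quantities defined in the quoted text.

# Minimal sketch of the general concept-attribution idea (in the spirit of
# concept-activation-vector methods), not the citing paper's exact method.
# `acts_concept` / `acts_random` are assumed precomputed layer activations
# for the concept images x^(1), x^(2), ... and for random counterexamples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_vector(acts_concept, acts_random):
    """Return a unit vector in activation space pointing toward the concept."""
    X = np.vstack([acts_concept, acts_random])
    y = np.concatenate([np.ones(len(acts_concept)), np.zeros(len(acts_random))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v = clf.coef_.ravel()                      # normal of the separating hyperplane
    return v / np.linalg.norm(v)               # normalised concept direction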