2022
DOI: 10.1613/jair.1.13575
On Tackling Explanation Redundancy in Decision Trees

Abstract: Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models. The interpretability of decision trees motivates explainability approaches by so-called intrinsic interpretability, and it is at the core of recent proposals for applying interpretable ML models in high-risk applications. The belief in DT interpretability is justified by the fact that explanations for DT predictions are generally expected to be succinct. Indeed, in the case of DTs, explanations correspond to DT paths.…
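The redundancy the paper targets can be illustrated with a small, self-contained sketch. This is a hypothetical toy example, not taken from the paper: the target concept depends only on x1, but the tree tests x2 at the root, so every path-based explanation mentions x2 even though it never affects the prediction. A brute-force check confirms which path literals are redundant.

```python
from itertools import product

# Toy tree (hypothetical): the root tests x2, both subtrees then test x1.
# Both branches behave identically, so the x2 test is redundant.
def tree_predict(x):
    if x[1] == 0:
        return 1 if x[0] == 1 else 0
    else:
        return 1 if x[0] == 1 else 0

def path_literals(x):
    # literals (feature index, value) collected along the path taken by x
    return [(1, x[1]), (0, x[0])]

def is_redundant(literal, literals, prediction, n_features=2):
    # A literal is redundant if every input satisfying the REMAINING
    # literals still receives the same prediction.
    rest = [l for l in literals if l != literal]
    for point in product([0, 1], repeat=n_features):
        if all(point[i] == v for i, v in rest):
            if tree_predict(point) != prediction:
                return False
    return True

x = (1, 0)                       # x1 = 1, x2 = 0
lits = path_literals(x)
pred = tree_predict(x)
redundant = [l for l in lits if is_redundant(l, lits, pred)]
print(redundant)                 # prints [(1, 0)] — the test on x2 is redundant
```

The path explanation {x2 = 0, x1 = 1} thus shrinks to the succinct {x1 = 1}; the brute-force check is exponential in the number of features, whereas the paper studies how such redundancy can be handled efficiently for DTs.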

Cited by 31 publications (71 citation statements)
References 83 publications (150 reference statements)
“…Example 7 illustrates important limitations of DTs in terms of interpretability, and justifies recent work on explaining DTs [166][167][168]. More importantly, it has been shown that the redundancy in tree paths (i.e.…”
supporting
confidence: 67%
“…Some other authors propose the use of interpretable models as the explanation itself [232,267,268]. There is by now mounting evidence [149,155,166,167,214] that even these so-called interpretable models ought to be explained with the methods described in this paper 4 .…”
Section: Introduction
mentioning
confidence: 89%