Trading Complexity for Sparsity in Random Forest Explanations

2022 · DOI: 10.1609/aaai.v36i5.20484

Abstract: Random forests have long been considered as powerful model ensembles in machine learning. By training multiple decision trees, whose diversity is fostered through data and feature subsampling, the resulting random forest can lead to more stable and reliable predictions than a single decision tree. This however comes at the cost of decreased interpretability: while decision trees are often easily interpretable, the predictions made by random forests are much more difficult to understand, as they involve a major…
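The aggregation described in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's code: each "tree" is reduced to a decision stump (a single threshold test), and the forest predicts by majority vote over the stumps.

```python
# Toy sketch of random-forest-style aggregation: majority voting over
# a small ensemble of hand-built decision stumps. All names here are
# illustrative; a real forest would train diverse trees via data and
# feature subsampling, as the abstract describes.
from collections import Counter

def stump(feature_index, threshold):
    """A one-node 'tree': returns class 1 iff the feature exceeds the threshold."""
    return lambda x: 1 if x[feature_index] > threshold else 0

def forest_predict(forest, x):
    """Predict by majority vote over the trees in the ensemble."""
    votes = Counter(tree(x) for tree in forest)
    return votes.most_common(1)[0][0]

forest = [stump(0, 0.5), stump(1, 0.3), stump(0, 0.7)]
x = [0.6, 0.2]
# Votes: 1, 0, 0 -> the majority class is 0.
print(forest_predict(forest, x))  # -> 0
```

Even in this toy, a single stump's output is trivially explained by its one test, while explaining the ensemble's vote requires reasoning about all trees jointly, which is the interpretability gap the paper addresses.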


Cited by 14 publications (17 citation statements) · References 27 publications (24 reference statements)
“…Though this test can be achieved in polynomial time for some families of classifiers f (including decision trees) [22,15], it is intractable in general. Especially, it is coNP-hard when f is a random forest [2]. Similarly, when f is a boosted tree BT , we have: Proposition 1.…”
Section: Computing Sufficient Reasons
confidence: 97%
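The intractability discussed in the statement above can be made concrete with a brute-force sketch. This is an assumed, illustrative setup (the function names and the toy forest are not from the paper): to decide whether a term t is an abductive explanation under a Boolean classifier f, the naive test enumerates every completion of the unfixed variables and checks that f is constant, which takes exponential time in the number of free variables — consistent with the cited coNP-hardness for random forests.

```python
# Naive check of whether a partial assignment t is an abductive
# explanation for a Boolean classifier f over n attributes: f must
# return the same class on EVERY completion of the free variables.
# This enumeration is exponential; the cited result shows no efficient
# test exists for random forests unless coNP collapses.
from itertools import product

def majority_forest(x):
    # Toy forest over 3 Boolean attributes: majority vote of three trees.
    t1 = x[0]
    t2 = x[1]
    t3 = x[0] and x[2]
    return int(t1 + t2 + t3 >= 2)

def is_abductive_explanation(t, f, n):
    """t maps variable indices to fixed 0/1 values; f must be constant
    over all 2^(n - len(t)) completions for t to be an explanation."""
    free = [i for i in range(n) if i not in t]
    outputs = set()
    for bits in product([0, 1], repeat=len(free)):
        x = [0] * n
        for i, v in t.items():
            x[i] = v
        for i, b in zip(free, bits):
            x[i] = b
        outputs.add(f(x))
    return len(outputs) == 1

# Fixing x0 = 1 and x1 = 1 already guarantees a majority of 1,
# whatever x2 is, so this term is an abductive explanation:
print(is_abductive_explanation({0: 1, 1: 1}, majority_forest, 3))  # -> True
```

Fixing only x0 = 1 is not enough (the class still depends on x1), so the same check returns False for the term {0: 1}.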
“…• coNP-hardness: it has been shown in [2] (Proposition 3) that deciding whether t is an abductive explanation for x given a random forest RF over Boolean attributes is coNP-complete. Thus, it is enough to show that we can associate in polynomial time any random forest RF = {T1, .…”
Section: Proofs (Proof of Proposition)
confidence: 99%
“…Different duality results have been obtained [158,160], which relate different kinds of explanations. Practically efficient logic encodings have been devised for computing explanations for a number of families of classifiers [23,151,153,155,161,171]. Compilation approaches for explainability have been studied in a number of recent works [83,84,86,88,281,282]. A number of computational complexity results, covering the computation of one explanation but also other queries, have been proved [21,149,155,171,215].…”
Section: Introduction
confidence: 99%