Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021
DOI: 10.24963/ijcai.2021/356

On Explaining Random Forests with SAT

Abstract: Random Forests (RFs) are among the most widely used Machine Learning (ML) classifiers. Even though RFs are not interpretable, there are no dedicated non-heuristic approaches for computing explanations of RFs. Moreover, there is recent work on polynomial algorithms for explaining ML models, including naive Bayes classifiers. Hence, one question is whether finding explanations of RFs can be solved in polynomial time. This paper answers this question negatively, by proving that computing one PI-explanation of a…
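To make the notion of a PI-explanation (also called an abductive explanation, AXp) concrete, the following is a minimal illustrative sketch — not the paper's algorithm — of the standard deletion-based procedure for extracting one AXp. A toy Boolean function stands in for the RF, and an exhaustive invariance check stands in for the SAT oracle; all names here (`clf`, `is_invariant`, `one_axp`) are hypothetical.

```python
from itertools import product

# Toy Boolean classifier standing in for a random forest; in SAT-based
# approaches the invariance check below is delegated to a SAT oracle.
def clf(x):  # x = (x1, x2, x3), each 0 or 1
    return int((x[0] and x[1]) or x[2])

def is_invariant(fixed, v, clf, n=3):
    """True iff fixing the features in `fixed` to v's values forces
    clf(x) == clf(v) for every completion of the free features
    (exhaustive stand-in for an entailment/SAT query)."""
    target = clf(v)
    for x in product([0, 1], repeat=n):
        if all(x[i] == v[i] for i in fixed) and clf(x) != target:
            return False
    return True

def one_axp(v, clf, n=3):
    """Deletion-based extraction of one AXp: a subset-minimal set of
    features whose values alone entail the prediction on v."""
    axp = set(range(n))
    for i in range(n):
        axp.discard(i)                  # tentatively free feature i
        if not is_invariant(axp, v, clf, n):
            axp.add(i)                  # i is necessary: restore it
    return sorted(axp)

print(one_axp((1, 1, 0), clf))  # → [0, 1]: x1=1, x2=1 already entail class 1
```

The exhaustive check makes this exponential in the number of features; the point of the paper's complexity result is precisely that, for RFs, each such entailment query is itself computationally hard, motivating the SAT encoding.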

Cited by 33 publications (44 citation statements)
References 46 publications
“…Y). This observation is at the core of the algorithms proposed in recent years for computing AXp's and CXp's of a growing range of families of classifiers [93,94,139,118,98,119,100,117,83,89,81].…”
Section: CXp(Y)
confidence: 92%
“…Formal explanation approaches have been studied in a growing body of research in recent years [169,93,170,94,139,188,14,57,58,168,152,44,92,118,98,119,100,117,83,13,89,12,55,45,81,151,114,185,59,42,9,82,88,121,74]. Concretely, this paper uses the definition of abductive explanation [93] (AXp), which corresponds to a PI-explanation [169] in the case of boolean classifiers.…”
Section: Formal Explainability
confidence: 99%
“…Although recent years have witnessed a growing interest in finding explanations of machine learning (ML) models (Lipton 2018; Guidotti et al 2019; Weld and Bansal 2019; Monroe 2021), explanations have been studied from different perspectives and in different branches of AI at least since the 80s (Shanahan 1989; Falappa, Kern-Isberner, and Simari 2002; Pérez and Uzcátegui 2003), including more recently in constraint programming (Amilhastre, Fargier, and Marquis 2002; Bogaerts et al 2020; Gamba, Bogaerts, and Guns 2021). In the case of ML models, non-heuristic explanations have been studied in recent years (Shih, Choi, and Darwiche 2018; Ignatiev, Narodytska, and Marques-Silva 2019a; Shih, Choi, and Darwiche 2019; Narodytska et al 2019; Ignatiev, Narodytska, and Marques-Silva 2019b,c; Darwiche and Hirth 2020; Ignatiev et al 2020a; Ignatiev 2020; Audemard, Koriche, and Marquis 2020; Marques-Silva et al 2020; Barceló et al 2020; Ignatiev et al 2020b; Izza, Ignatiev, and Marques-Silva 2020; Wäldchen et al 2021; Izza and Marques-Silva 2021; Malfa et al 2021; Ignatiev and Marques-Silva 2021; Cooper and Marques-Silva 2021; Huang et al 2021; Audemard et al 2021; Marques-Silva and Ignatiev 2022; Ignatiev et al 2022; Shrotri et al 2022). Some of these earlier works studied explanations for classifiers represented wit…”
Section: Related Work
confidence: 99%