2022
DOI: 10.1609/aaai.v36i4.20292
Using MaxSAT for Efficient Explanations of Tree Ensembles

Abstract: Tree ensembles (TEs) are a prevalent machine learning model that offers no guarantees of interpretability, which represents a challenge from the perspective of explainable artificial intelligence. Besides model-agnostic approaches, recent work proposed to explain TEs with formally defined explanations, which are computed with oracles for propositional satisfiability (SAT) and satisfiability modulo theories (SMT). The computation of explanations for TEs involves linear constraints to express the prediction. In pr…
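To make the notion of a formally defined explanation concrete, the sketch below computes an abductive explanation (a subset-minimal set of feature values sufficient to force the prediction) for a toy tree ensemble. This is an illustrative stand-in only: the ensemble, feature space, and function names are hypothetical, and the exhaustive check over free features plays the role that the paper's SAT/SMT/MaxSAT oracles play at scale.

```python
# Illustrative sketch (NOT the paper's MaxSAT encoding): an abductive
# explanation for a hypothetical tree ensemble over binary features.
# The brute-force sufficiency check stands in for a SAT/SMT oracle.
from itertools import product

def predict(x):
    """Toy ensemble: majority vote of three decision stumps."""
    votes = [
        1 if x[0] == 1 else 0,                # stump on feature 0
        1 if x[1] == 1 else 0,                # stump on feature 1
        1 if x[0] == 1 or x[2] == 1 else 0,   # stump on features 0 and 2
    ]
    return 1 if sum(votes) >= 2 else 0

def is_sufficient(fixed, instance, target):
    """Oracle stand-in: do ALL completions of the free features keep the
    prediction at `target` when the `fixed` features match `instance`?"""
    free = [i for i in range(len(instance)) if i not in fixed]
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if predict(x) != target:
            return False
    return True

def abductive_explanation(instance):
    """Greedily drop features whose value is not needed to force the
    prediction; the remaining set is subset-minimal."""
    target = predict(instance)
    expl = set(range(len(instance)))
    for i in range(len(instance)):
        if is_sufficient(expl - {i}, instance, target):
            expl.discard(i)
    return sorted(expl), target

print(abductive_explanation([1, 1, 0]))  # → ([0], 1)
```

For the instance [1, 1, 0], fixing feature 0 to 1 alone already guarantees the ensemble predicts 1, so features 1 and 2 are dropped. The paper's contribution is replacing this exponential enumeration with MaxSAT-based reasoning, which this sketch does not attempt to reproduce.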


Cited by 25 publications (35 citation statements). References 29 publications.
“…9. A sample of references on formal explainability includes (Shih et al., 2018; Ignatiev et al., 2019a; Shih et al., 2019; Ignatiev et al., 2019b; Narodytska et al., 2019; Wolf et al., 2019; Audemard et al., 2020; Darwiche, 2020; Darwiche and Hirth, 2020; Shi et al., 2020; Rago et al., 2020; Boumazouza et al., 2020; Ignatiev et al., 2020b; Izza et al., 2020; Marques-Silva et al., 2021; Malfa et al., 2021; Huang et al., 2021b; Audemard et al., 2021; Asher et al., 2021; Cooper and Marques-Silva, 2021; Boumazouza et al., 2021; Huang et al., 2021a; Rago et al., 2021; Liu and Lorini, 2021; Wäldchen et al., 2021; Darwiche and Marquis, 2021; Blanc et al., 2021; Arenas et al., 2021; Huang et al., 2022; Ignatiev et al., 2022; Marques-Silva and Ignatiev, 2022; Gorji and Rubin, 2022). 10.…”
“…Although recent years have witnessed a growing interest in finding explanations of machine learning (ML) models (Lipton 2018; Guidotti et al. 2019; Weld and Bansal 2019; Monroe 2021), explanations have been studied from different perspectives and in different branches of AI at least since the 80s (Shanahan 1989; Falappa, Kern-Isberner, and Simari 2002; Pérez and Uzcátegui 2003), including more recently in constraint programming (Amilhastre, Fargier, and Marquis 2002; Bogaerts et al. 2020; Gamba, Bogaerts, and Guns 2021). In the case of ML models, non-heuristic explanations have been studied in recent years (Shih, Choi, and Darwiche 2018; Ignatiev, Narodytska, and Marques-Silva 2019a; Shih, Choi, and Darwiche 2019; Narodytska et al. 2019; Ignatiev, Narodytska, and Marques-Silva 2019b,c; Darwiche and Hirth 2020; Ignatiev et al. 2020a; Ignatiev 2020; Audemard, Koriche, and Marquis 2020; Marques-Silva et al. 2020; Barceló et al. 2020; Ignatiev et al. 2020b; Izza, Ignatiev, and Marques-Silva 2020; Wäldchen et al. 2021; Izza and Marques-Silva 2021; Malfa et al. 2021; Ignatiev and Marques-Silva 2021; Cooper and Marques-Silva 2021; Huang et al. 2021; Audemard et al. 2021; Marques-Silva and Ignatiev 2022; Ignatiev et al. 2022; Shrotri et al. 2022). Some of these earlier works studied explanations for classifiers represented wit...…”
Section: Related Work
“…First, similar to gradient-based methods, they require full knowledge of the original ML model. Second, although for a number of ML models these approaches are shown to be practically effective (Ignatiev, Narodytska, and Marques-Silva 2019b; Izza, Ignatiev, and Marques-Silva 2020; Marques-Silva et al. 2020; Izza and Marques-Silva 2021; Ignatiev and Marques-Silva 2021; Huang et al. 2021; Ignatiev et al. 2022; Huang et al. 2022; Marques-Silva and Ignatiev 2022), formal approaches to XAI still face scalability issues for some other ML models (Ignatiev, Narodytska, and Marques-Silva 2019a), as formal reasoning about ML models is in general computationally expensive.…”
Section: Related Work