2022
DOI: 10.1609/aaai.v36i5.20514
Tractable Explanations for d-DNNF Classifiers

Abstract: Compilation into propositional languages finds a growing number of practical uses, including in constraint programming, diagnosis and machine learning (ML), among others. One concrete example is the use of propositional languages as classifiers, and one natural question is how to explain the predictions made. This paper shows that for classifiers represented with some of the best-known propositional languages, different kinds of explanations can be computed in polynomial time. These languages include determini…

Cited by 25 publications (43 citation statements)
References 40 publications
“…These results were further extended in more recent work (Cooper and Marques-Silva 2021). Finally, more recent results showed that classifiers represented with propositional languages (Darwiche and Marquis 2002) can be explained efficiently for a broad class of languages (Huang et al. 2021a, 2022). Concretely, classifiers represented with d-DNNF (or with any strictly more succinct language) can be explained in polynomial time.…”
Section: Tractable Explanations
Confidence: 99%
“…Concretely, classifiers represented with d-DNNF (or with any strictly more succinct language) can be explained in polynomial time. The same work (Huang et al. 2021a, 2022) also studied general decision functions (GDFs). GDFs associate a boolean function κ_i with each class c_i ∈ K, and such that the functions {κ_1, ….”
Section: Tractable Explanations
Confidence: 99%
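The GDF formulation quoted above can be illustrated with a small sketch: one boolean function per class, with the prediction given by whichever function holds. The classifier and its κ functions below are hypothetical examples, not taken from the paper; the sketch assumes the κ_i partition the feature space (exactly one is true on every input), so the prediction is well defined.

```python
# Hypothetical sketch of a general decision function (GDF): a boolean
# function kappa_i is associated with each class c_i, and an input is
# assigned class c_i when kappa_i evaluates to true. The functions below
# are illustrative and chosen so that exactly one holds for any input.

def kappa_0(x):  # class c_0: both features are 0
    return not x[0] and not x[1]

def kappa_1(x):  # class c_1: exactly one feature is 1
    return x[0] != x[1]

def kappa_2(x):  # class c_2: both features are 1
    return bool(x[0] and x[1])

KAPPAS = [kappa_0, kappa_1, kappa_2]

def classify(x):
    # Exactly one kappa_i holds by construction; return its class index.
    return next(i for i, k in enumerate(KAPPAS) if k(x))

print(classify([1, 0]))  # → 1 (exactly one feature set)
```

When each κ_i is represented in d-DNNF, the cited result is that explanations for such multi-class decision functions remain computable in polynomial time.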
“…First, similar to gradient-based methods, they require full knowledge of the original ML model. Second, although for a number of ML models these approaches are shown to be practically effective (Ignatiev, Narodytska, and Marques-Silva 2019b; Izza, Ignatiev, and Marques-Silva 2020; Marques-Silva et al. 2020; Izza and Marques-Silva 2021; Ignatiev and Marques-Silva 2021; Huang et al. 2021; Ignatiev et al. 2022; Huang et al. 2022; Marques-Silva and Ignatiev 2022), formal approaches to XAI still face scalability issues in case of some other ML models (Ignatiev, Narodytska, and Marques-Silva 2019a), as formal reasoning about ML models is in general computationally expensive.…”
Section: Related Work
Confidence: 99%
“…In this paper we measure interpretability in terms of the overall succinctness of the information provided by an ML model to justify a given prediction. Moreover, and building on earlier work, we equate explanations with the so-called abductive explanations (AXps) (Shih, Choi, and Darwiche 2018; Ignatiev, Narodytska, and Marques-Silva 2019a,b; Darwiche and Hirth 2020; Izza, Ignatiev, and Marques-Silva 2020; Ignatiev et al. 2020; Barceló et al. 2020; Marques-Silva et al. 2020, 2021; Huang et al. 2021; Marques-Silva and Ignatiev 2022; Huang et al. 2022), i.e. subset-minimal sets of feature-value pairs that are sufficient for the prediction.…”
Section: Introduction
Confidence: 99%
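The notion of an abductive explanation in the excerpt above — a subset-minimal set of feature-value pairs sufficient for the prediction — can be sketched with a deletion-based linear search: try dropping each feature in turn and keep it only if the remaining set stops being sufficient. The toy classifier and the brute-force sufficiency check below are hypothetical illustrations, not the paper's polynomial-time d-DNNF algorithm (which avoids this exponential enumeration).

```python
from itertools import product

# Toy classifier (hypothetical example): predicts 1 iff (x0 AND x1) OR x2.
def predict(x):
    return int((x[0] and x[1]) or x[2])

def is_sufficient(instance, features, n):
    """Check that fixing `features` to their values in `instance` forces
    predict() to the instance's prediction for every completion of the
    remaining (free) features. Brute force, for illustration only."""
    target = predict(instance)
    free = [i for i in range(n) if i not in features]
    for values in product([0, 1], repeat=len(free)):
        candidate = list(instance)
        for i, v in zip(free, values):
            candidate[i] = v
        if predict(candidate) != target:
            return False
    return True

def axp(instance):
    """Deletion-based linear search: drop each feature whose removal
    leaves the set sufficient; the result is subset-minimal (an AXp)."""
    n = len(instance)
    features = set(range(n))
    for i in range(n):
        if is_sufficient(instance, features - {i}, n):
            features.discard(i)
    return sorted(features)

print(axp([1, 1, 0]))  # → [0, 1]: x0=1 and x1=1 alone force prediction 1
```

Each sufficiency check here costs a call per completion of the free features; the tractability results quoted above replace that check with polynomial-time queries on the d-DNNF representation.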