Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022
DOI: 10.24963/ijcai.2022/91

On Preferred Abductive Explanations for Decision Trees and Random Forests

Abstract: Abductive explanations take a central place in eXplainable Artificial Intelligence (XAI) by clarifying, with few features, the way data instances are classified. However, instances may have exponentially many minimum-size abductive explanations, and this source of complexity holds even for "intelligible" classifiers, such as decision trees. When the number of such abductive explanations is huge, computing only one of them is often not informative enough. Especially, better explanations than the one that…

Cited by 12 publications (12 citation statements) · References 0 publications
“…Especially, user preferences, when available, can be used to select explanations of improved quality. It has been shown that the exploitation of user preferences may drastically reduce the number of abductive explanations (60). It would be interesting to evaluate the extent to which handling user preferences (of various kinds) impacts the explanatory importance of features when dealing with decision trees.…”
Section: Discussion
“…Enumerating minimum-size sufficient reasons An approach to synthesizing the set of sufficient reasons consists in focusing on the minimum-size ones. Indeed, though the set of minimum-size sufficient reasons for an instance given a decision tree can also be exponentially large (60), the number of minimum-size sufficient reasons cannot exceed the number of sufficient reasons, and it can be significantly lower in practice.…”
Section: Computing All Sufficient Reasons
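The enumeration task quoted above can be illustrated with a minimal brute-force sketch. This is not the cited paper's algorithm; the nested-tuple tree encoding and all function names below are hypothetical, and the search only scales to toy examples. It tests feature subsets in increasing size and returns every sufficient reason found at the first size that yields one — which is exactly the set of minimum-size sufficient reasons.

```python
from itertools import combinations, product

# Toy binary decision tree encoding (hypothetical): a node is either
# ('leaf', cls) or (feature_index, low_subtree, high_subtree).
def classify(tree, x):
    while tree[0] != 'leaf':
        feat, lo, hi = tree
        tree = hi if x[feat] else lo
    return tree[1]

def is_sufficient(tree, x, S, n):
    """S is a sufficient reason (abductive explanation) for x iff every
    completion of x restricted to S receives the same class as x."""
    target = classify(tree, x)
    free = [i for i in range(n) if i not in S]
    for vals in product([0, 1], repeat=len(free)):
        y = list(x)
        for i, v in zip(free, vals):
            y[i] = v
        if classify(tree, y) != target:
            return False
    return True

def minimum_size_sufficient_reasons(tree, x, n):
    """Brute-force enumeration: try subsets in increasing size and keep
    every hit at the first size that yields one."""
    for k in range(n + 1):
        hits = [set(S) for S in combinations(range(n), k)
                if is_sufficient(tree, x, set(S), n)]
        if hits:
            return hits
    return []

# Tree computing x1 AND (x0 OR x2): two minimum-size reasons coexist.
tree = (1, ('leaf', 0),
           (0, (2, ('leaf', 0), ('leaf', 1)),
               ('leaf', 1)))
print(minimum_size_sufficient_reasons(tree, (1, 1, 1), 3))  # → [{0, 1}, {1, 2}]
```

Even this three-feature example exhibits several minimum-size sufficient reasons, consistent with the statement that their number, while bounded by the number of sufficient reasons, can still be large.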
“…Similarly to the deterministic case, a PAXp is a subset-minimal weak AXp. A set X ⊆ F such that (22) holds is also referred to as relevant set. In the next section, we will illustrate how PAXp's are computed in the case of DTs.…”
Section: Problem Formulation
“…A key observation is that the feature values that make a path consistent are disjoint from the values that make other paths consistent. This observation allows us to compute the models consistent with each path and, as a result, to compute (22). Let R k ∈ R represent some path in the decision tree.…”
Section: Probabilistic Explanations for Decision Trees
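The disjointness observation quoted above can be sketched in code: because distinct root-to-leaf paths are mutually exclusive, the completions of a partial instance consistent with each path can be counted independently and summed. The sketch below is a toy brute-force illustration over binary features, not the cited paper's algorithm; the tree encoding and all names are hypothetical.

```python
# Toy binary decision tree encoding (hypothetical): a node is either
# ('leaf', cls) or (feature_index, low_subtree, high_subtree).
def classify(tree, x):
    while tree[0] != 'leaf':
        feat, lo, hi = tree
        tree = hi if x[feat] else lo
    return tree[1]

def paths(tree, fixed=None):
    """Yield (constraints, cls) for each root-to-leaf path, where
    constraints maps a feature index to the value the path requires."""
    if fixed is None:
        fixed = {}
    if tree[0] == 'leaf':
        yield dict(fixed), tree[1]
        return
    feat, lo, hi = tree
    yield from paths(lo, {**fixed, feat: 0})
    yield from paths(hi, {**fixed, feat: 1})

def path_precision(tree, x, X, n):
    """Fraction of completions of x restricted to X classified like x.
    Since paths are mutually exclusive, per-path model counts can
    simply be summed."""
    target = classify(tree, x)
    good = total = 0
    for constr, cls in paths(tree):
        # A path is consistent with x|X iff it never contradicts it.
        if any(i in X and constr[i] != x[i] for i in constr):
            continue
        # Features fixed neither by X nor by the path remain free.
        free = [i for i in range(n) if i not in X and i not in constr]
        count = 2 ** len(free)
        total += count
        if cls == target:
            good += count
    return good / total

# Tree computing x1 AND (x0 OR x2); fixing only x1 = 1 keeps the
# prediction correct for 3 of the 4 completions.
tree = (1, ('leaf', 0),
           (0, (2, ('leaf', 0), ('leaf', 1)),
               ('leaf', 1)))
print(path_precision(tree, (1, 1, 1), {1}, 3))  # → 0.75
```

Summing per-path counts is valid precisely because no model can be consistent with two distinct paths, which is the key observation the quoted statement relies on.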