2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)
DOI: 10.1109/fuzz-ieee.2019.8858846

LEAFAGE: Example-based and Feature importance-based Explanations for Black-box ML models

Abstract: Explainable Artificial Intelligence (XAI) is an emerging research field that addresses the lack of transparency of AI systems by providing human-understandable explanations for the underlying Machine Learning models. This work presents a new explanation extraction method called LEAFAGE. Explanations are provided both in terms of feature importance and of similar classification examples. The latter is a well-known strategy for problem solving and justification in social science. LEAFAGE leverages on …
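As a rough illustration of the two explanation modalities described in the abstract, the sketch below pairs a nearest-neighbour lookup (similar classification examples) with a LIME-style weighted linear surrogate (feature importance) around a black-box classifier. This is a generic, hypothetical sketch on a scikit-learn dataset, not the LEAFAGE algorithm itself; the dataset, model, kernel width, and function names are illustrative assumptions.

```python
# Generic sketch: feature-importance + example-based explanations for a black box.
# NOT the LEAFAGE implementation; dataset, model and kernel width are assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

def explain(x, k=5):
    """Return (surrogate feature weights, k most similar training examples, their labels)."""
    z = scaler.transform(x.reshape(1, -1))
    X_std = scaler.transform(X_train)
    dists = np.linalg.norm(X_std - z, axis=1)

    # Example-based part: the k nearest training instances in standardized feature space.
    nearest = np.argsort(dists)[:k]

    # Feature-importance part: a locally weighted linear surrogate of the black box's
    # predicted probability (LIME-style); nearby points get higher sample weights.
    weights = np.exp(-(dists / np.median(dists)) ** 2)
    target = black_box.predict_proba(X_train)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(X_std, target, sample_weight=weights)

    return surrogate.coef_, X_train[nearest], y_train[nearest]

coef, examples, labels = explain(X_test[0])
print("most influential features:", np.argsort(np.abs(coef))[::-1][:5])
print("labels of the most similar training examples:", labels)
```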

Cited by 21 publications (6 citation statements)
References 13 publications
“…Others try to produce explanations more grounded in the features of the data, such as a ranking of features important for the prediction or a decision-tree approximating the model's logic [16,33,46]. However, a growing body of work that has tried to empirically measure the efficacy of many of these methods has shown that they often do not actually affect or improve human decision-making [2,8,30,43], and in practice are primarily used for internal model debugging [6].…”
Section: Related Work, 2.1 Interpretability Methods For Human Understan... (mentioning)
confidence: 99%
“…2) CF model agnostic interpretability methods: Among other model-agnostic approaches that were tested with reported results on tree ensemble models, we can mention the LORE [23], LEAFAGE [24], and CLEAR [25] approaches. The LORE approach uses local interpretable surrogates in order to derive sets of CF rules.…”
Section: B. CF Example-based Interpretability Methods (mentioning)
confidence: 99%
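As background for the local-surrogate counterfactual rules mentioned in the excerpt above, a minimal, hypothetical sketch: label random perturbations of the instance with the black box, fit a shallow decision tree on them, and read off rule paths leading to a different class as counterfactual candidates. This is a generic illustration under assumed perturbation and tree settings, not the LORE or LEAFAGE procedure; `counterfactual_rules` and its parameters are made up for this example.

```python
# Hypothetical sketch of local-surrogate counterfactual rules (not LORE/LEAFAGE).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def counterfactual_rules(black_box, x, feature_names, n_samples=2000, scale=0.3, seed=0):
    """Fit a shallow tree on perturbations of x labelled by the black box and print
    its rules; paths that end in a different class are counterfactual candidates."""
    rng = np.random.default_rng(seed)
    # Gaussian perturbations proportional to each feature's magnitude (assumption).
    noise = rng.normal(scale=scale, size=(n_samples, x.shape[0])) * np.abs(x)
    Z = x + noise
    yz = black_box.predict(Z)  # black-box labels in the local neighbourhood
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Z, yz)
    print(export_text(tree, feature_names=list(feature_names)))
    return tree

# Example call, reusing black_box and X_test from the earlier sketch:
# counterfactual_rules(black_box, X_test[0], load_breast_cancer().feature_names)
```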
“…It was important to evaluate the visualization from the perspective of the end-users, an aspect often brushed over in XAI studies [56,59]. The explanation was specifically designed for DNA experts within the context of NOC estimation, so we invited DNA experts from the NFI to participate in a user study.…”
Section: Set-up User Study (mentioning)
confidence: 99%