2021
DOI: 10.1007/978-3-030-84060-0_19

On the Trustworthiness of Tree Ensemble Explainability Methods

Abstract: The recent increase in the deployment of machine learning models in critical domains such as healthcare, criminal justice, and finance has highlighted the need for trustworthy methods that can explain these models to stakeholders. Feature importance methods (e.g. gain and SHAP) are among the most popular explainability methods used to address this need. For any explainability technique to be trustworthy and meaningful, it has to provide an explanation that is accurate and stable. Although the stability of loca…
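The two feature-importance methods named in the abstract can be contrasted directly in code. The snippet below is a minimal sketch on synthetic data, assuming the xgboost and shap packages; the dataset and model settings are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

# Illustrative synthetic data and model (not the paper's setup).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = xgb.XGBClassifier(n_estimators=50, max_depth=3, random_state=0).fit(X, y)

# "Gain" importance: total loss reduction contributed by each feature's splits.
gain = model.get_booster().get_score(importance_type="gain")

# SHAP importance: mean absolute Shapley value per feature across the dataset.
shap_values = shap.TreeExplainer(model).shap_values(X)
shap_importance = np.abs(shap_values).mean(axis=0)

print("gain:", gain)
print("shap:", np.round(shap_importance, 3))
```

The two rankings often disagree on tree ensembles, which is exactly the kind of inconsistency the paper's trustworthiness question targets.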

Cited by 6 publications (3 citation statements)
References 28 publications (29 reference statements)
“…There are also minor fluctuations in SHAP and LIME, although they are visually smoother. Similar to findings from existing work [10], the faithfulness of these XAI methods is inconsistent on decision tree models. The inter-rater agreements for the survey questions are reported in Table 4.…”
Section: Results (supporting)
confidence: 70%
“…a decrease in accuracy or change in polarity) [6] [9]. There are many variations; for example, it is also possible to evaluate faithfulness with synthetic data and known feature importance [10].…”
Section: Explainable AI (mentioning)
confidence: 99%
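The evaluation idea in that statement can be sketched concretely: build synthetic data whose ground-truth feature importances are known by construction, then test whether an explainer recovers their ranking. The weights, model, and rank-correlation check below are illustrative assumptions, not the exact protocol of [10].

```python
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
true_weights = np.array([4.0, 2.0, 1.0, 0.0])  # ground-truth importance, known by design
y = X @ true_weights + rng.normal(scale=0.1, size=1000)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
shap_importance = np.abs(shap.TreeExplainer(model).shap_values(X)).mean(axis=0)

# A faithful explainer should rank features in the ground-truth order.
rho, _ = spearmanr(true_weights, shap_importance)
print("recovered importance:", np.round(shap_importance, 2))
print("rank correlation with ground truth:", round(rho, 2))
```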
“…Because of the importance of explainability in machine learning models and the success of GBDT as a predictive model, SHAP explanations have been used in many applications and studies. However, SHAP explanations over GBDT models appear to lack accuracy and stability in their values [29,30,31].…”
Section: Introduction (mentioning)
confidence: 99%
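One way to probe the stability concern raised in that statement is to retrain a GBDT on bootstrap resamples and measure how much the per-feature SHAP importances vary. The sketch below is an illustrative protocol under those assumptions, not the method used in [29,30,31].

```python
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
rng = np.random.default_rng(0)

importances = []
for seed in range(10):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
    model = GradientBoostingRegressor(random_state=seed).fit(X[idx], y[idx])
    sv = shap.TreeExplainer(model).shap_values(X)
    importances.append(np.abs(sv).mean(axis=0))

importances = np.array(importances)
# Coefficient of variation per feature: high values signal unstable explanations.
print("CV per feature:", np.round(importances.std(axis=0) / importances.mean(axis=0), 3))
```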