2022
DOI: 10.1007/978-3-031-14463-9_5
Towards Refined Classifications Driven by SHAP Explanations

Abstract: Machine Learning (ML) models are inherently approximate; as a result, their predictions can be wrong. In applications where errors can jeopardize a company's reputation, human experts often have to check the alarms raised by ML models by hand, as wrong or delayed decisions can have a significant business impact. These experts often use interpretable ML tools to verify predictions. However, post-prediction verification is also costly. In this paper, we hypothesize that th…

Cited by 7 publications (1 citation statement)
References 25 publications
“…NN classifier is used as an indirect method to evaluate the performance of ProtoDASH, while our work differs by not evaluating ProtoDASH but employing it to improve classification model performance. Our use of ProtoDASH is in line with the growing literature of the articulation of explainable AI techniques to ML pipeline [17], [18].…”
Section: A. ProtoDASH
Confidence: 58%