2021 · Preprint
DOI: 10.48550/arxiv.2111.09121

Uncertainty Quantification of Surrogate Explanations: an Ordinal Consensus Approach

Abstract: Explainability of black-box machine learning models is crucial, in particular when deployed in critical applications such as medicine or autonomous cars. Existing approaches produce explanations for the predictions of models; however, how to assess the quality and reliability of such explanations remains an open question. In this paper, we take a step further in order to provide the practitioner with tools to judge the trustworthiness of an explanation. To this end, we produce estimates of the uncertainty of a …
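The abstract is truncated in the source, but the title points to an ordinal (rank-based) consensus over repeated surrogate explanations. The following is a minimal sketch of that general idea, not the paper's actual algorithm: refit a local linear surrogate several times under fresh perturbations, rank features by coefficient magnitude, and score rank agreement with Kendall's W. The function names, the Gaussian perturbation scheme, and the choice of Kendall's W as the consensus statistic are all illustrative assumptions.

```python
# Hypothetical sketch: ordinal consensus across repeated surrogate explanations.
# All names and the consensus statistic are assumptions, not the paper's method.
import numpy as np

def surrogate_importances(black_box, x, n_samples=500, sigma=0.1, rng=None):
    """Fit a local linear surrogate around x; return per-feature importances."""
    rng = np.random.default_rng(rng)
    X = x + sigma * rng.standard_normal((n_samples, x.shape[0]))
    dy = black_box(X) - black_box(x[None])[0]      # response deltas vs. x
    coef, *_ = np.linalg.lstsq(X - x, dy, rcond=None)
    return np.abs(coef)                            # magnitude = importance

def ordinal_consensus(black_box, x, n_runs=30, **kwargs):
    """Kendall's W over importance rankings from repeated surrogate fits.
    W near 1 = stable ranks (low uncertainty); near 0 = no agreement."""
    ranks = np.array([
        np.argsort(np.argsort(-surrogate_importances(black_box, x, rng=i, **kwargs)))
        for i in range(n_runs)
    ]) + 1                                         # 1-based ranks, runs x features
    m, n = ranks.shape
    R = ranks.sum(axis=0)                          # rank sum per feature
    S = ((R - R.mean()) ** 2).sum()
    return 12.0 * S / (m ** 2 * (n ** 3 - n))      # Kendall's W

# Toy usage: a smooth model gives stable rankings, so W is close to 1.
f = lambda X: (X ** 2).sum(axis=1)
print(ordinal_consensus(f, np.array([1.0, 0.5, 0.1])))
```

An explanation whose feature rankings flip from run to run would score much lower under this statistic, which is the sense in which rank consensus can flag an untrustworthy explanation.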

Cited by 1 publication (2 citation statements) · References 21 publications

Citation statements (ordered by relevance):
“…Related Work While our work is unique in that we do not assume access to the underlying ML model, our approach builds upon previous methods that attempt to quantify the uncertainty of explanations. These include methods that apply standard sampling error techniques to explainability [30] as well as non-parametric approaches [27,35]. While [27] also employs bootstrap methods, our approach is different in that we focus on gradient estimation of the ML model instead of rank orders of feature importance.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…These include methods that apply standard sampling error techniques to explainability [30] as well as non-parametric approaches [27,35]. While [27] also employs bootstrap methods, our approach is different in that we focus on gradient estimation of the ML model instead of rank orders of feature importance. Additionally, [35] focuses on quantifying the uncertainty in explanations of convolutional neural networks, while our method is model-agnostic.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
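To make the contrast drawn in these statements concrete, here is a hedged sketch of the citing work's stated angle: bootstrap uncertainty on a least-squares gradient estimate of a black-box model, rather than on rank orders of feature importance. Every name, the perturbation scale, and the percentile intervals are illustrative assumptions, not the citing paper's actual implementation.

```python
# Hypothetical sketch: bootstrap confidence intervals on a gradient estimate
# of a black-box model (illustrative only; not the citing paper's code).
import numpy as np

def gradient_ci(black_box, x, n_samples=500, sigma=0.05,
                n_boot=200, alpha=0.05, seed=0):
    """Least-squares gradient estimate at x with bootstrap percentile CIs."""
    rng = np.random.default_rng(seed)
    D = sigma * rng.standard_normal((n_samples, x.shape[0]))   # perturbations
    dy = black_box(x + D) - black_box(x[None])[0]              # response deltas
    grads = []
    for _ in range(n_boot):                                    # resample pairs
        idx = rng.integers(0, n_samples, n_samples)
        g, *_ = np.linalg.lstsq(D[idx], dy[idx], rcond=None)
        grads.append(g)
    grads = np.asarray(grads)
    lo, hi = np.quantile(grads, [alpha / 2, 1 - alpha / 2], axis=0)
    return grads.mean(axis=0), lo, hi

# Toy usage: for f(x) = sum(x^2) the gradient at x is 2x, and the intervals
# should bracket it.
f = lambda X: (X ** 2).sum(axis=1)
print(gradient_ci(f, np.array([1.0, 0.5, 0.1])))
```

Wide intervals on a gradient component would signal that the local explanation is sensitive to sampling noise, which parallels the rank-instability signal in the ordinal consensus approach above.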