2021
DOI: 10.48550/arxiv.2104.04144
Preprint

Individual Explanations in Machine Learning Models: A Survey for Practitioners

Alfredo Carrillo,
Luis F. Cantú,
Alejandro Noriega

Abstract: In recent years, the use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise. Although these models can often bring substantial improvements in the accuracy and efficiency of organizations, many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways. Hence, these models are often regarded as black-boxes, in the sense that their internal mechanisms can be opaq…

Cited by 6 publications (10 citation statements)
References 22 publications
“…Statistical models may have advantages over common interpretable methods such as SHAP because they provide a generalizable interpretation. Specifically, the SHAP values change according to the distribution of the given input data [18], making it difficult to generalize the interpretation. However, since p-values and odds ratios are constant regardless of the given input data, logistic regression can provide a generalizable interpretation [43].…”
Section: Theoretical Discussion of Contributions (mentioning)
confidence: 99%
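The contrast drawn in this statement can be made concrete with a minimal sketch (assuming scikit-learn and the shap package's LinearExplainer; the dataset and model choices are illustrative, not taken from the cited work): the odds ratios of a fitted logistic regression are fixed properties of the model, while SHAP attributions for that same model shift when the background data distribution changes.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical data and model, used only to illustrate the point above.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Model-intrinsic interpretation: odds ratios depend only on the fitted coefficients,
# not on any background sample.
odds_ratios = np.exp(model.coef_[0])

# Post-hoc interpretation: SHAP values are computed relative to a background dataset,
# so two different backgrounds yield two different attributions for the same instance.
background_a = X[:100]
background_b = X[250:350]
explainer_a = shap.LinearExplainer(model, background_a)
explainer_b = shap.LinearExplainer(model, background_b)

x = X[:1]                           # one instance to explain
phi_a = explainer_a.shap_values(x)  # attributions w.r.t. background A
phi_b = explainer_b.shap_values(x)  # attributions w.r.t. background B

print("odds ratios:", odds_ratios)        # constant for the fitted model
print("SHAP (background A):", phi_a)
print("SHAP (background B):", phi_b)      # generally differs from phi_a
```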
“…The last limitation is the lack of flexibility of the post-hoc interpretation, which is caused by the dependence of model-specific interpretation on the model structure [18]. Specifically, because the ABP trend shapes were generated based on 30 s of ABP in this study, the association between the hypotension development and ABP trends at different time lengths such as 40 s is not interpretable.…”
Section: E. Limitations and Future Work (mentioning)
confidence: 98%
“…Model-specific methods focus on constructing a transparent mechanism that allows intrinsic interpretation of the model itself. Examples include variable importance computed from boosting or bagging machine learning algorithms and feature maps extracted from certain layers or weights in a neural network [19], [20]. By contrast, model-agnostic methods are applied independently of the model by approximating the relationship between the input and output data.…”
Section: B. XAI and Hypotension Prediction Model Interpretability (mentioning)
confidence: 99%
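The model-specific versus model-agnostic distinction described in this statement can be illustrated with a brief sketch (scikit-learn only; the dataset and estimator are assumptions for illustration): an impurity-based importance read directly from a fitted tree ensemble, next to a permutation importance that treats the same model as a black box and only queries its predictions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical data and ensemble model for illustration.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-specific: impurity-based importances are tied to the tree ensemble's structure.
intrinsic_importance = forest.feature_importances_

# Model-agnostic: permutation importance measures how shuffling each input column
# degrades predictive performance, without looking inside the model.
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
agnostic_importance = result.importances_mean

print("model-specific (impurity-based):", intrinsic_importance)
print("model-agnostic (permutation):   ", agnostic_importance)
```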