Individual Explanations in Machine Learning Models: A Survey for Practitioners
Alfredo Carrillo,
Luis F. Cantú,
Alejandro Noriega
Abstract: In recent years, the use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise. Although these models can often bring substantial improvements in the accuracy and efficiency of organizations, many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways. Hence, these models are often regarded as black boxes, in the sense that their internal mechanisms can be opaque…
“…Statistical models may have advantages over common interpretable methods such as SHAP because they provide a generalizable interpretation. Specifically, SHAP values change according to the distribution of the given input data [18], making it difficult to generalize the interpretation. However, since p-values and odds ratios are constant regardless of the given input data, logistic regression can provide a generalizable interpretation [43].…”
Section: Theoretical Discussion of Contributions
confidence: 99%
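The background-dependence of SHAP noted in the quote above can be illustrated with a small sketch (toy numbers, not from the cited work). For a linear model with independent features, the exact SHAP value of feature j is coef_j · (x_j − E[x_j]), where the expectation is taken over a chosen background dataset; shifting that background shifts every attribution, while the fitted coefficients, like logistic-regression odds ratios, stay fixed:

```python
# Illustrative sketch: exact SHAP values for a linear model depend on the
# background data used to estimate E[x], while the coefficients do not.
import numpy as np

rng = np.random.default_rng(0)
coef = np.array([1.5, -2.0])          # fixed (fitted) linear weights
x = np.array([0.8, 0.3])              # instance being explained

background_a = rng.normal(0.0, 1.0, size=(500, 2))   # background centered at 0
background_b = rng.normal(1.0, 1.0, size=(500, 2))   # background centered at 1

def linear_shap(x, coef, background):
    """Exact Shapley values for a linear model with independent features."""
    return coef * (x - background.mean(axis=0))

phi_a = linear_shap(x, coef, background_a)
phi_b = linear_shap(x, coef, background_b)

# The attributions differ although the model (coef) is unchanged.
print(phi_a, phi_b)
```

This is the mechanism behind the quoted claim: the explanation shifts with the input distribution even when the model itself is constant.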
“…The last limitation is the lack of flexibility of the post-hoc interpretation, which is caused by the dependence of model-specific interpretation on the model structure [18]. Specifically, because the ABP trend shapes were generated based on 30 s of ABP in this study, the association between hypotension development and ABP trends at different time lengths, such as 40 s, is not interpretable.…”
Section: E Limitations and Future Workmentioning
confidence: 98%
“…Model-specific methods focus on constructing a transparent mechanism that allows intrinsic interpretation of the model itself. Examples include variable importance computed from boosting or bagging machine learning algorithms and feature maps extracted from certain layers or weights in neural networks [17], [18]. In contrast, model-agnostic methods are applied independently of the model by approximating the relationship between input and output data.…”
Section: B Explainable AI and Hypotension Prediction Model Interpretability
confidence: 99%
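The model-specific variable importance mentioned above can be sketched in a few lines (toy data, illustrative only, not from the cited study): a bagged tree ensemble such as scikit-learn's `RandomForestClassifier` exposes impurity-based importances intrinsically, with no external explainer:

```python
# Minimal sketch of a model-specific interpretation: variable importance
# read directly from a bagged tree ensemble. The data is synthetic and
# constructed so that only feature 0 carries signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] > 0).astype(int)         # label depends on feature 0 only

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Importances are intrinsic to the fitted ensemble; feature 0 should dominate.
print(model.feature_importances_)
```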
“…In contrast, model-agnostic methods are applied independently from the model by approximating the relationship between input and output data. Shapley additive explanation (SHAP) and local interpretable model-agnostic explanations (LIME) are representative examples [17], [18].…”
Section: B Explainable AI and Hypotension Prediction Model Interpretability
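The model-agnostic idea behind LIME, cited above, can be sketched as follows (the kernel width, perturbation scale, and black-box function below are illustrative assumptions, not from the cited papers): perturb the instance, query the black box, and fit a distance-weighted linear surrogate whose coefficients serve as the local explanation:

```python
# Hedged sketch of a LIME-style local surrogate: the explanation uses only
# input-output queries, so it works for any model.
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for any opaque model; nonlinear in feature 0, linear in feature 1.
    return np.tanh(2.0 * X[:, 0]) + 0.1 * X[:, 1]

x0 = np.array([0.5, -1.0])                           # instance to explain
Z = x0 + rng.normal(scale=0.3, size=(1000, 2))       # local perturbations
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.1)     # proximity kernel

# Weighted least squares for the local linear surrogate [intercept, b1, b2].
A = np.hstack([np.ones((len(Z), 1)), Z])
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(A * sw[:, None], black_box(Z) * sw, rcond=None)

# beta[1] approximates the local slope in feature 0; beta[2] the slope 0.1.
print(beta)
```

Because the surrogate is fit only near x0, its coefficients describe the model locally, which is exactly the "individual explanation" setting the survey addresses.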
<p>Monitoring arterial blood pressure (ABP) in anesthetized patients is crucial for preventing hypotension, which can lead to adverse clinical outcomes. Thus, several efforts have been made to develop an artificial intelligence-based hypotension prediction index. Nevertheless, the use of these indices is limited because they may not provide a convincing interpretation of the association between predictors and hypotension. Herein, we developed an interpretable deep learning model that forecasts hypotension occurrences 10 min before a given 90 s ABP record. Internal and external validations of model performance reported the area under the receiver operating characteristic curve (AUC) as 0.9145 and 0.9035, respectively. Furthermore, the hypotension prediction mechanism can be physiologically interpreted by using predictors representing ABP trends that are automatically generated in the proposed model. Ultimately, we demonstrate the high applicability of a deep learning model that achieves high accuracy and provides an interpretation of the association between ABP trends and hypotension in clinical practice.</p>
“…Model-specific methods focus on constructing a transparent mechanism that allows intrinsic interpretation of the model itself. Examples include variable importance computed from boosting or bagging machine learning algorithms and feature maps extracted from certain layers or weights in a neural network [19], [20]. By contrast, model-agnostic methods are applied independently of the model by approximating the relationship between the input and output data.…”
Section: B XAI and Hypotension Prediction Model Interpretability