2021
DOI: 10.1007/s10916-021-01736-5
An Explainable Artificial Intelligence Framework for the Deterioration Risk Prediction of Hepatitis Patients

Cited by 61 publications (24 citation statements)
References 18 publications
“…Through our understandable prediction and feature selection phase, it is possible to determine which features of the datasets were most important in prediction. This explainable COVID-19 disease diagnosis model has higher transparency and explainability than previous black-box methods [[67], [68], [69], [70], [103], [104], [105]], which can improve the acceptance rate and trustworthiness of intelligent models among physicians.…”
Section: Discussion (mentioning)
confidence: 97%
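The feature-selection claim in this excerpt rests on ranking features by their contribution to the prediction. A minimal sketch of one common way to compute such a ranking, using permutation importance on a generic tree ensemble; this is an illustration, not the cited study's pipeline, and the synthetic dataset and feature names are placeholders:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a clinical tabular dataset (hypothetical features).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance: the accuracy drop when one feature's values
# are shuffled estimates how much the model relies on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")

Features whose shuffling barely changes held-out accuracy are candidates for removal, which is the intuition behind the "feature selection phase" the quote describes.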
“…Overall, we noticed that in the diagnosis of COVID-19, complex machine-learning models such as deep learning perform better than simple models such as linear regression and decision trees. Nevertheless, the deep-learning approaches proposed in previous works were black boxes that did not explain their predictions in a manner a human could understand [[67], [68], [69], [70]]. It is therefore important to endow the highly performing deep-learning models with explainability and interpretability, both to accommodate the new EU data protection directive and to ensure their widespread adoption by healthcare authorities.…”
Section: Introduction (mentioning)
confidence: 99%
“…Mehbodniya et al. (8) used machine learning to classify fetal health from cardiotocographic data. Peng et al. (9) used an explainable artificial intelligence framework to predict the deterioration risk of hepatitis patients. Hu et al. (10) used a deep learning system for lymph node quantification and metastatic cancer identification.…”
Section: Introduction (mentioning)
confidence: 99%
“…Nonetheless, the findings of this study will be useful for improving existing predictive models in future research; in particular, multi-center data should be included and the predictions should be rigorously validated on external data. Finally, to address the black-box nature of the ML model, this study followed several previous studies (22, 41, 42) in using the SHAP method for global interpretation and LIME for local interpretation. Although the results of both types of interpretation of the XGBoost model were consistent and credible, improved robustness could be attained by using other interpretation methods, such as Shapley Lorenz, a novel global interpretation method that provides a normalized measure of explainability.…”
Section: Discussion (mentioning)
confidence: 99%
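The SHAP-for-global / LIME-for-local pattern this excerpt describes is straightforward to reproduce. A minimal sketch, assuming the shap, lime, and xgboost packages and a synthetic dataset in place of the study's clinical data (feature names here are hypothetical):

import numpy as np
import shap
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification

# Synthetic stand-in data; features are placeholders, not the study's variables.
X, y = make_classification(n_samples=400, n_features=6, random_state=1)
names = [f"f{i}" for i in range(X.shape[1])]
model = xgb.XGBClassifier(n_estimators=100).fit(X, y)

# Global interpretation: mean absolute SHAP value ranks features
# by their average contribution across the whole dataset.
shap_values = shap.TreeExplainer(model).shap_values(X)
global_rank = np.abs(shap_values).mean(axis=0)
print(sorted(zip(names, global_rank), key=lambda t: -t[1]))

# Local interpretation: LIME fits a simple surrogate model around
# one instance to explain that single prediction.
lime_exp = LimeTabularExplainer(X, feature_names=names, mode="classification")
print(lime_exp.explain_instance(X[0], model.predict_proba, num_features=3).as_list())

Checking that the global SHAP ranking and the per-instance LIME weights point to the same dominant features is one practical way to obtain the consistency the quoted study reports between its two interpretation methods.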