2021
DOI: 10.3390/app112110417

A LIME-Based Explainable Machine Learning Model for Predicting the Severity Level of COVID-19 Diagnosed Patients

Abstract: The fast and seemingly uncontrollable spread of the novel coronavirus disease (COVID-19) poses great challenges to an already overloaded health system worldwide. It thus underlines an urgent need for fast and effective triage. Such triage can help in implementing the necessary measures to prevent patient deterioration and to conserve strained hospital resources. We examine two types of machine learning models, multilayer perceptron artificial neural networks and decision trees, to predict the severity…

Cited by 31 publications (14 citation statements)
References 34 publications
“…To derive a representation that is understandable by humans, LIME estimates the importance of contiguous superpixels in a source image toward the output class. It has been widely applied in COVID-19 diagnosis tasks [41], [45]-[48] to further explain the feature-extraction process, which contributes to a better understanding of which features in CT/X-ray images characterize the onset of COVID-19. Ahsan et al. [45] used LIME to interpret the top features in COVID-19 X-ray imaging and to build trust in an AI framework that distinguishes patients with COVID-19 symptoms from other patients.…”
Section: Perturbation-based
confidence: 99%
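The superpixel procedure described in this statement maps directly onto the reference `lime` package. The sketch below is a minimal, hedged example, assuming a fitted image classifier `model` whose `predict` method returns class probabilities and a chest X-ray loaded as an (H, W, 3) array named `image`; these identifiers are illustrative and not taken from the cited studies.

```python
# Minimal image-LIME sketch; `model` and `image` are assumed to exist.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

explainer = lime_image.LimeImageExplainer()

# LIME segments the image into contiguous superpixels, randomly hides
# subsets of them, and fits a local linear model on the classifier's
# outputs to score each superpixel's importance toward the output class.
explanation = explainer.explain_instance(
    np.asarray(image, dtype="double"),
    classifier_fn=model.predict,  # must return class probabilities
    top_labels=2,
    num_samples=1000,             # number of perturbed copies to score
)

# Overlay the superpixels that most support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
overlay = mark_boundaries(img / 255.0, mask)
```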
“…
Perturbation-based:
- Feature occlusion and ablation [26]-[28], [32], [33]
- SHAP feature importance [35]-[41]
- Local interpretable model-agnostic explanations (LIME) [41], [44]-[48]
Activation-based:
- Activation maximization [49]
- Class activation maps (CAM) [50], [51]
Gradient-based: …”
Section: Perturbation-based
confidence: 99%
“…LIME is a technique for explaining black-box models, or models whose inner logic is obscure and difficult to comprehend [47]. LIME adjusts the feature values for a single data sample and monitors the impact on the output.…”
Section: Local Interpretable Model-Agnostic Explanations (LIME)
confidence: 99%
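The perturb-and-observe loop described here corresponds to the tabular explainer in the reference `lime` package. Below is a minimal sketch, assuming a fitted scikit-learn classifier `clf`, a training matrix `X_train`, and a matching `feature_names` list; the severity class names are hypothetical placeholders.

```python
# Minimal tabular-LIME sketch; `clf`, `X_train`, `feature_names` assumed.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["mild", "severe"],  # hypothetical severity labels
    discretize_continuous=True,
)

# Perturb one patient's feature values around the observed row, query the
# black box on each perturbed copy, and fit a locally weighted model.
exp = explainer.explain_instance(
    X_train[0],            # the single data sample being explained
    clf.predict_proba,     # black-box probability function
    num_features=5,
)
print(exp.as_list())       # top features with signed local weights
```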
“…LIME is a popular interpretation tool that learns a new interpretable model that can better explain a less interpretable model. Numerous studies have successfully applied LIME to provide interpretation of complex models, including in biomedicine [195, 196]. Another popular interpretation method is DeepLIFT [197], which calculates the contribution of neurons in a trained neural network by evaluating the difference in activation from a chosen representative reference.…”
Section: Major Challenges For Clinical Utility Of Complex and Data-dr…
confidence: 99%
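Stripped of library details, "learning a new interpretable model" around one prediction reduces to a few steps: sample perturbations near the instance, query the black box, and fit a proximity-weighted linear model whose coefficients act as local attributions. The helper below is an illustrative from-scratch sketch, not the reference implementation; `black_box` and `x0` are hypothetical placeholders, and the Gaussian sampling and kernel width are simplifying assumptions.

```python
# From-scratch sketch of LIME's local-surrogate idea; not the reference
# implementation. `black_box` must return a 1-D score per input row.
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(black_box, x0, n_samples=1000, kernel_width=0.75):
    # Sample perturbed copies of the instance x0 in its neighborhood.
    Z = x0 + np.random.normal(scale=0.1, size=(n_samples, x0.size))
    y = black_box(Z)                       # black-box predictions
    # Weight each sample by its proximity to x0 (exponential kernel).
    d = np.linalg.norm(Z - x0, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # The interpretable surrogate: a weighted linear fit around x0.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=w)
    return surrogate.coef_                 # local feature attributions
```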