2022
DOI: 10.3389/fpubh.2022.874455
An Explainable AI Approach for the Rapid Diagnosis of COVID-19 Using Ensemble Learning Algorithms

Abstract: Background: Artificial intelligence-based disease prediction models have greater potential to screen COVID-19 patients than conventional methods. However, their application has been restricted because of their underlying black-box nature. Objective: To address this issue, an explainable artificial intelligence (XAI) approach was developed to screen patients for COVID-19. Methods: A retrospective study consisting of 1,737 participants (759 COVID-19 patients and 978 controls) admitted to San Raphael Hospital (OSR) f…

Cited by 15 publications (9 citation statements)
References 48 publications (43 reference statements)
“…The most critical markers were basophils, eosinophils, red cell distribution width and leukocytes. An XAI approach was used to diagnose COVID-19 in another study [ 71 ]. Four ensemble models were explained using LIME.…”
Section: Results
confidence: 99%
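The LIME technique mentioned in the statement above explains an individual prediction of a black-box ensemble by fitting a locally weighted linear surrogate around that instance. A minimal sketch of the idea, using only scikit-learn and entirely synthetic data (the features, model, and kernel width are illustrative assumptions, not details from the cited study):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Hypothetical tabular data: 4 numeric features, binary label (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # features 0 and 2 drive the label

# The "black-box" ensemble to be explained.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def lime_style_explanation(model, x, n_samples=1000, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around instance x (the LIME idea)."""
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.shape[0]))
    # 2. Query the black box for class-1 probabilities at the perturbed points.
    p = model.predict_proba(Z)[:, 1]
    # 3. Weight perturbed points by proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # 4. The weighted linear model's coefficients are the local attributions.
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_

# Explain a point near the decision boundary; feature 0 should dominate.
coefs = lime_style_explanation(model, np.zeros(4))
print(coefs.round(2))
```

The production LIME library adds interpretable binning of features and per-feature sampling statistics, but the perturb–query–weight–fit loop above is the core of the method.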
“…All these keywords were then grouped into nine clusters based on their frequent co-occurrence, helping researchers identify important themes in the research, as seen in Figure 11. A network map can be shown visually by using scientific mapping techniques that examine text data such as keywords collected from titles and abstracts [52]. Each term in this map is represented as a node, and the connections between them are shown as edges between the nodes.…”
Section: Most Relevant Countries
confidence: 99%
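The network map described above treats each keyword as a node and each within-document co-occurrence as an edge. The counting step behind such a map can be sketched with the standard library alone (the keyword lists below are invented for illustration and do not come from the cited bibliometric study):

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-document keyword lists (illustrative data only).
documents = [
    ["xai", "covid-19", "ensemble learning"],
    ["xai", "lime", "ensemble learning"],
    ["covid-19", "diagnosis", "xai"],
]

# Nodes: keywords weighted by how many documents mention them.
nodes = Counter(term for doc in documents for term in doc)

# Edges: unordered keyword pairs weighted by co-occurrence within a document.
edges = Counter()
for doc in documents:
    for a, b in combinations(sorted(set(doc)), 2):  # sorted -> canonical pair order
        edges[(a, b)] += 1

print(nodes["xai"])                 # 3 (appears in every document)
print(edges[("covid-19", "xai")])   # 2 (co-occur in documents 1 and 3)
```

Tools such as VOSviewer then cluster this weighted graph and lay it out visually; node size reflects the frequency count and edge thickness the co-occurrence count computed as above.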
“…An explainable AI (XAI) approach is used to convert this black-box system into a glass-box system that helps users understand the AI system's prediction report [36]. Explainability is the degree to which humans can understand the AI decision [37], gain insight into the AI system, and follow the logic behind the decision. Applying XAI has three main benefits: (a) it provides a transparent interpretation and boosts trust in the designed model; (b) it enables model troubleshooting; (c) it identifies the basis of the system's decisions.…”
Section: XAI
confidence: 99%