2023
DOI: 10.1038/s41598-023-33525-0
Detection of the chronic kidney disease using XGBoost classifier and explaining the influence of the attributes on the model using SHAP

Abstract: Chronic kidney disease (CKD) is a condition distinguished by structural and functional changes to the kidney over time. Studies show that 10% of adults worldwide are affected by some kind of CKD, resulting in 1.2 million deaths. Recently, CKD has emerged as a leading cause of mortality worldwide, making it necessary to develop a Computer-Aided Diagnostic (CAD) system to diagnose CKD automatically. A Machine Learning (ML) based CAD system can be used by clinicians to automatically diagnose large numbers of patients. Since ML…
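For orientation, the sketch below shows the general shape of the pipeline the abstract describes: an XGBoost classifier trained on CKD-style tabular data. It is a minimal illustration, not the paper's exact setup; the file name ckd.csv, the class column, the label encoding, and the hyperparameters are all assumptions.

```python
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical CKD dataset; categorical attributes are assumed to be
# numerically encoded already.
df = pd.read_csv("ckd.csv")
X = df.drop(columns=["class"])
y = df["class"].map({"notckd": 0, "ckd": 1})   # assumed label encoding

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = xgb.XGBClassifier(
    n_estimators=200, max_depth=4, learning_rate=0.1, eval_metric="logloss"
)
model.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```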

Cited by 15 publications (3 citation statements)
References 57 publications (56 reference statements)
“…To interpret the model's output, SHAP considers the significance of every feature to the machine learning model's prediction [61]. By applying principles from game theory, it calculates the contribution of each attribute to the model's output and offers comprehensible and straightforward explanations.…”
Section: Explainable Artificial Intelligence (XAI) (mentioning)
confidence: 99%
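As an illustration of the SHAP usage this statement describes, here is a minimal sketch applying shap.TreeExplainer to a fitted XGBoost model. It reflects the general SHAP API rather than the cited papers' code; model and X_test are assumed from the sketch above.

```python
import shap

# TreeExplainer computes Shapley-value attributions efficiently for tree
# ensembles such as XGBoost.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: how strongly each attribute pushes predictions overall.
shap.summary_plot(shap_values, X_test)

# Local view: per-feature contributions for a single patient's prediction.
shap.force_plot(
    explainer.expected_value, shap_values[0], X_test.iloc[0], matplotlib=True
)
```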
“…The final stacking models based on Mutual Information, Pearson's correlation, Particle Swarm Optimization, and the Harris Hawks algorithm were employed for interpretation [46]. SHAP explains the machine learning model's output by using each feature's significance to the model's prediction [47]. It computes the contribution of each characteristic to the model's output using game theory concepts and provides clear, interpretable explanations.…”
Section: Explainable Artificial Intelligence (XAI) (mentioning)
confidence: 99%
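The game-theoretic attribution both statements allude to is the Shapley value. In the standard SHAP formulation (stated here for context, not reproduced from the cited papers), the contribution of feature i for a model with feature set F and conditional value function f_S evaluated at input x is:

```latex
\phi_i \;=\; \sum_{S \subseteq F \setminus \{i\}}
\frac{|S|!\,\bigl(|F|-|S|-1\bigr)!}{|F|!}\,
\Bigl[ f_{S \cup \{i\}}\bigl(x_{S \cup \{i\}}\bigr) - f_S\bigl(x_S\bigr) \Bigr]
```

Each feature's score is its average marginal contribution over all subsets of the remaining features, which is what makes the resulting explanations additive and straightforward to read.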
“…By analyzing the feature importance scores provided by XGBoost, data scientists can identify the most influential features in a dataset, gain insights into the relationships between features and the target variable, and potentially improve the model's performance by selecting or engineering more relevant features [Wang et al., 2020; Friedmann, 2001; Raihan et al., 2023; Machado et al., 2019]. The scores for each electrode obtained from the XGBoost analysis are represented in Fig.…”
Section: 1D CNN+LSTM Model with Feature Selection (mentioning)
confidence: 99%
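A short sketch of how such importance scores are typically read off a fitted XGBoost model (generic API usage; model and X_test are the assumed objects from the earlier sketches, not the cited study's EEG pipeline):

```python
import pandas as pd

# Gain-based importance from the fitted booster: average split gain per feature.
gain = model.get_booster().get_score(importance_type="gain")
print(pd.Series(gain).sort_values(ascending=False).head(10))

# Equivalent sklearn-style view: normalised importances aligned with the columns.
importances = pd.Series(model.feature_importances_, index=X_test.columns)
print(importances.sort_values(ascending=False).head(10))
```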