2021
DOI: 10.52876/jcs.1004847
Detection of risk factors of PCOS patients with Local Interpretable Model-agnostic Explanations (LIME) Method that an explainable artificial intelligence model

Abstract: Aim: This study aims to extract patient-based explanations of the contribution of important features in the decision-making (estimation) process of the Random Forest (RF) model, which is difficult to interpret for PCOS disease risk, using Local Interpretable Model-Agnostic Explanations (LIME). Materials and Methods: In this study, the Local Interpretable Model-Agnostic Explanations (LIME) method was applied to the “Polycystic ovary syndrome” dataset to explain the Random Forest (RF) model, which…
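The abstract describes the core LIME procedure: perturb one patient's feature vector, query the black-box Random Forest on the perturbations, and fit a locally weighted linear surrogate whose coefficients give per-patient feature contributions. A minimal numpy-only sketch of that idea follows; the feature names, the stand-in predictor, and all numeric values are hypothetical illustrations, not the authors' model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_predict(X):
    """Stand-in for a trained Random Forest's predicted risk probability.
    A fixed nonlinear function of two hypothetical features (BMI, cycle length)."""
    bmi, cycle = X[:, 0], X[:, 1]
    logits = 0.8 * (bmi - 25.0) + 0.3 * (cycle - 28.0) ** 2 / 10.0
    return 1.0 / (1.0 + np.exp(-logits))

def lime_explain(x, predict_fn, n_samples=5000, kernel_width=1.0):
    """LIME-style local surrogate: perturb, query, fit weighted linear model."""
    # 1) Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.size))
    # 2) Query the black box on the perturbations.
    y = predict_fn(Z)
    # 3) Weight samples by proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4) Weighted least squares: coefficients are local feature effects.
    A = np.column_stack([np.ones(n_samples), Z - x])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # per-feature local contributions (intercept dropped)

x0 = np.array([30.0, 35.0])  # one hypothetical patient
effects = lime_explain(x0, black_box_predict)
for name, e in zip(["BMI", "cycle_length"], effects):
    print(f"{name}: {e:+.3f}")
```

Both coefficients come out positive for this patient, mirroring how LIME reports which features push an individual prediction toward higher risk; the published `lime` package additionally discretizes features and samples from training statistics.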

Cited by 4 publications (4 citation statements)
References 26 publications (22 reference statements)
“…Mardaoui et al 13 provided an analysis based on LIME for interpreting text data. Çiçek et al 14 provided a lucid account of application of LIME for detection of risk factors of PCOS patients. Jain et al 15 explained sentiment analysis results on social media texts through visualization.…”
Section: Previous Work on Explainable
Mentioning confidence: 99%
“…[9][10][11] The applicability of classification models to diagnose disease is highly dependent on the ability to interpret and explain the applied models. 12 It is vital to explain model predictions, particularly in medical imaging applications. 13,14 In recent years techniques have been developed to explain the decisions made by artificial intelligence (AI) models known as Explainable-AI (XAI).…”
Section: Introduction
Mentioning confidence: 99%
“…Contrary to the first method, a different approach where there is no backpropagation algorithm and not limited to any specific model type, namely Local interpretable Model-agnostic Explanation (LIME) was developed. [8][9][10][11][12]17 LIME is a well-known and frequently used method to ensure the interpretability and explainability of the AI-based models. 12 As LIME is basically designed to be model agnostic, it can be used for various machine learning methods.…”
Section: Introduction
Mentioning confidence: 99%
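The citation statement above stresses that LIME is model-agnostic: the explainer needs nothing from the model except a prediction function, so the same routine works for a smooth linear scorer and a tree-style step function alike. The sketch below illustrates that point with two hypothetical stand-in models (neither is from the cited paper).

```python
import numpy as np

rng = np.random.default_rng(1)

def local_effects(x, predict_fn, n=4000, width=1.0):
    """Same perturb-and-fit routine for any model exposed via predict_fn."""
    Z = x + rng.normal(size=(n, x.size))
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / width ** 2)
    sw = np.sqrt(w)
    A = np.column_stack([np.ones(n), Z - x]) * sw[:, None]
    coef, *_ = np.linalg.lstsq(A, predict_fn(Z) * sw, rcond=None)
    return coef[1:]

linear_model = lambda X: X @ np.array([2.0, -1.0])    # smooth linear score
tree_like    = lambda X: (X[:, 0] > 0).astype(float)  # step function, tree-style

x0 = np.zeros(2)
e_lin = local_effects(x0, linear_model)   # recovers the true slopes [2, -1]
e_tree = local_effects(x0, tree_like)     # positive effect on feature 0 only
print(e_lin, e_tree)
```

For the linear model the surrogate recovers the global slopes exactly; for the step function it reports a positive local slope on the feature the split depends on and roughly zero on the other, without ever inspecting the model's internals.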