2023
DOI: 10.1145/3527174

Explanation-Driven HCI Model to Examine the Mini-Mental State for Alzheimer’s Disease

Abstract: Directing research on Alzheimer's towards only early prediction and accuracy cannot be considered a feasible approach to tackling a ubiquitous degenerative disease today. Applying deep learning (DL) and explainable artificial intelligence (XAI), and advancing towards a human-computer interface (HCI) model, can be a leap forward in medical research. This research aims to propose a robust explainable HCI model using Shapley additive explanations (SHAP), local interpretable model-agnostic explanations (LIME) and D…

Cited by 25 publications (12 citation statements)
References 40 publications
“…It may be good from an academic standpoint but contributes to added opaqueness in real-time. For instance, LIME and SHAP frameworks were used jointly in one study [113]. The feature rankings derived by these individual frameworks did not correlate with each other.…”
Section: XAI Researchers Often Resort To Self-Intuition To De…
confidence: 99%
“…The feature rankings derived by these individual frameworks did not correlate with each other. The Mini-Mental State Examination (MMSE) significantly contributes to SHAP, whereas normalised Whole Brain Value (nWBV) dominates the LIME features [113].…”
Section: XAI Researchers Often Resort To Self-Intuition To De…
confidence: 99%
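The disagreement described above — MMSE dominating the SHAP ranking while nWBV dominates the LIME ranking — can be quantified as a rank correlation between the two importance orderings. The sketch below is illustrative only: the feature set and rank values are hypothetical (MMSE and nWBV appear in the quoted statements; the other features and all rank numbers are assumptions), and Spearman's rho is computed in pure Python.

```python
# Illustrative sketch: measuring how much two feature-importance
# rankings (e.g. one from SHAP, one from LIME) disagree, using
# Spearman rank correlation. All ranks below are hypothetical.

def spearman_rho(rank_a, rank_b):
    """Spearman correlation between two rankings of the same n items
    (no ties): rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

features = ["MMSE", "nWBV", "Age", "EDUC", "SES"]  # hypothetical set
shap_rank = [1, 3, 2, 4, 5]  # MMSE ranked first by SHAP (illustrative)
lime_rank = [3, 1, 4, 2, 5]  # nWBV ranked first by LIME (illustrative)

rho = spearman_rho(shap_rank, lime_rank)
print(f"Spearman rho between SHAP and LIME rankings: {rho:.2f}")
```

A rho near 1 would indicate the two explainers agree on feature ordering; a low or negative value, as in the disagreement the citing authors describe, signals that the frameworks attribute importance differently.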
“…Gaur et al [251] utilized XAI methods including LIME and SHAP in conjunction with machine learning algorithms including Logistic Regression (80.87%), Support Vector Machine (85.8%), K-Nearest Neighbour (87.24%), Multilayer Perceptron (91.94%), and Decision Tree (100%) to build a robust explainable HCI model for examining the mini-mental state for Alzheimer's disease. It is worth mentioning that the most significant features contributing to the Alzheimer's disease examination were different for the LIME-based framework and the SHAP-based framework.…”
Section: XAI For Cyber Security Of Human-Computer Interaction (HCI)
confidence: 99%
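The five classifiers listed in the statement above can be compared in a few lines with scikit-learn. This is a minimal sketch, not the cited study's pipeline: it uses a stand-in public dataset (breast cancer) rather than the Alzheimer's data, default hyperparameters, and a single train/test split, so the accuracies it prints will not match the figures quoted above.

```python
# Minimal sketch: comparing the five classifier families named in the
# citing statement on a stand-in dataset (NOT the study's data).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)

models = {
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "Support Vector Machine": SVC(),
    "K-Nearest Neighbour": KNeighborsClassifier(),
    "Multilayer Perceptron": MLPClassifier(max_iter=1000, random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}

# Fit each model and report held-out accuracy.
accuracies = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
              for name, m in models.items()}
for name, acc in accuracies.items():
    print(f"{name}: {acc:.3f}")
```

Note that a 100% decision-tree accuracy, as reported in the quoted statement, is often a sign of overfitting or data leakage worth scrutinizing, which is consistent with the citing authors' caution about explainer disagreement.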
“…Meanwhile, explainable artificial intelligence (XAI) [34] is a pragmatic tool that accelerates the creation of predictive models with domain knowledge and increases the transparency of automatically generated prediction models in the medical sector [35,36], which has assisted in providing results that are understandable to humans [37]. We noted that most of the recent research defined the detection of kidney problems in terms of individual categories, including stones [38], tumors [39], and cysts [40].…”
Section: Introduction
confidence: 99%