2023
DOI: 10.1002/alz.12948

Interpretable machine learning for dementia: A systematic review

Abstract: Introduction: Machine learning research into automated dementia diagnosis is becoming increasingly popular but so far has had limited clinical impact. A key challenge is building robust and generalizable models that generate decisions that can be reliably explained. Some models are designed to be inherently "interpretable," whereas post hoc "explainability" methods can be used for other models. Methods: Here we sought to summarize the state of the art of interpretable machine learning for dementia. Results: We id…

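To make the abstract's distinction concrete, the sketch below contrasts an inherently interpretable model (coefficients read directly off the fitted model) with a post hoc explainability method applied to a black-box model. It is a minimal illustration on synthetic data, not taken from the review; the dataset, models, and choice of permutation importance as the explainer are illustrative assumptions.

```python
# Illustrative sketch only: inherently interpretable model vs. post hoc explanation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a tabular dementia dataset.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# Inherently "interpretable": the fitted coefficients are the explanation.
interpretable = LogisticRegression(max_iter=1000).fit(X, y)
print("logistic regression coefficients:", interpretable.coef_.ravel())

# Post hoc "explainability": a black-box model explained after training,
# here via permutation importance (SHAP or Grad-CAM play an analogous role).
black_box = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
print("permutation importances:", result.importances_mean)
```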
Cited by 28 publications (22 citation statements)
References 87 publications (174 reference statements)
“…a heatmap that does not focus clearly on clinically relevant features does not necessarily mean that the model's performance is poor. How best to analyze GradCAMs to understand model function is still an active area of research and is beyond the scope of this work [23][24][25]. Nevertheless, one might imagine that a more robustly trained model in the future may yield even better diagnostic performance.…”
Section: Discussion (mentioning; confidence: 99%)
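For context on the heatmaps discussed in this excerpt, the sketch below shows how a Grad-CAM map is commonly computed with PyTorch. The ResNet-18 backbone, target layer, and random input tensor are hypothetical placeholders rather than the cited study's pipeline.

```python
# Minimal Grad-CAM sketch (illustrative; not the cited work's implementation).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # placeholder backbone (torchvision >= 0.13)
target_layer = model.layer4             # last convolutional block

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)                 # placeholder "scan"
logits = model(x)
class_idx = logits.argmax(dim=1).item()
logits[0, class_idx].backward()                 # gradients for the predicted class

# Grad-CAM: weight each feature map by its spatially averaged gradient, then ReLU.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)          # (1, C, 1, 1)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)             # normalize to [0, 1]
print(cam.shape)   # heatmap the same size as the input image
```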
“…Although clear benefits on predictive ability have been demonstrated by the AI‐enabled models in comparison with conventional models,[19,82] the application of AI incorporated into RWD to support CDM is still under early development in China. In comparison, though the global trend has seen a similar rise in the popularity of technology‐/AI‐based CDSSs, a more extensive range of therapeutic areas has been covered, including orthopedics,[83] psychiatry and mental health,[84,85] allergy and immunology,[86] anesthesiology,[87] sleep medicine,[88] etc. To further promote clinical application and impact, healthcare providers and research communities worldwide have been actively investigating ways to incorporate AI‐/technology‐based CDSSs into clinical workflows by exploring barriers and facilitators in successful implementation and formulating collaborative strategies.[89–91]…”
Section: Discussion (mentioning; confidence: 99%)
“…Although deep learning techniques continue to redefine the landscape of VaD research, these methods still lack transparency, and the black-box nature of deep learning models makes outputs hard to interpret [65]. In the medical field, building robust, generalizable, and interpretable models is necessary but challenging [66,67]. Moreover, maintaining patient privacy and data security while utilizing sensitive data is also paramount.…”
Section: Discussion (mentioning; confidence: 99%)