2023
DOI: 10.1016/j.cmpb.2023.107723

Explainable machine-learning algorithms to differentiate bipolar disorder from major depressive disorder using self-reported symptoms, vital signs, and blood-based markers

Cited by 5 publications (3 citation statements)
References 44 publications
“…Moreover, most studies, like [21, 25], employ simple ML to build predictive models for depression, and studies that employ DL, like [26, 28], do not employ a parallel, multiple Evolutionary Algorithm-based optimisation scheme to optimise the model hyperparameters. Furthermore, most studies like [46, 47, 48, 49] use explainability to develop population-level explanations of various mental health disorders, while our work produces personalised insights.…”
Section: Discussion (mentioning)
confidence: 99%
“…Studies such as [43, 44, 45] use explainability techniques on ML models to obtain insights into the model outputs. Moreover, recent works have begun exploring explainability in mental health settings [24, 46, 47, 48, 49]. However, the use of explainability has been limited to the extraction of the most influential model features/inputs using SHAP or LIME [50].…”
Section: Introduction (mentioning)
confidence: 99%
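To make the "most influential features" use of SHAP mentioned in the statement above concrete, the following is a minimal, illustrative sketch; the synthetic dataset and gradient-boosting model are placeholders and are not taken from the cited studies.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in data: 300 samples, 10 numeric features (illustrative only).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes per-sample, per-feature SHAP attributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global importance = mean |SHAP value| per feature; sorting it yields the
# "most influential features" ranking that such studies typically report.
importance = np.abs(shap_values).mean(axis=0)
top = np.argsort(importance)[::-1][:5]
print("Top-5 most influential feature indices:", top)
```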
“…However, a novel method called SHapley Additive exPlanations (SHAP) [18] is known for interpreting ML models and is based on game theory. Studies have shown that SHAP tools can effectively quantify and visually elucidate nonlinear associations and complex interactions among predictors in ML models [16, 17, 19, 20]. However, the applicability of these tools to sequence-dependent models such as LSTM remains debatable [21].…”
Section: Introduction (mentioning)
confidence: 99%
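For context on the game-theoretic basis noted in the statement above: SHAP attributes a model prediction to features via Shapley values. In the standard definition (included here for reference, not drawn from the cited paper), the attribution of feature i over the feature set N is

```latex
\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl[v(S \cup \{i\}) - v(S)\bigr]
```

where v(S) is the model's expected output when only the features in S are treated as present. The attributions sum to the difference between the prediction and the baseline expectation, which is the additivity property that gives SHAP its name.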