2016
DOI: 10.1038/mp.2015.198

Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports

Abstract: Heterogeneity of major depressive disorder (MDD) illness course complicates clinical decision-making. While efforts to use symptom profiles or biomarkers to develop clinically useful prognostic subtypes have had limited success, a recent report showed that machine learning (ML) models developed from self-reports about incident episode characteristics and comorbidities among respondents with lifetime MDD in the World Health Organization World Mental Health (WMH) Surveys predicted MDD persistence, chronicity, an…

Cited by 197 publications (140 citation statements)
References 45 publications (48 reference statements)
“…Associations of these predicted values with outcomes over the intervening 10-12 years were then examined using reports obtained in the NCS-2 follow-up survey. These prospective associations were comparable to the retrospective associations found in WMH (Kessler et al, In Press). Importantly, meaningful discrimination was found both at the upper and lower ends of the predicted outcome distributions.…”
Section: Introduction (supporting)
confidence: 82%
“…explanatory models) and this has been lacking in over three decades of published studies (see [116] for a review). Validation on truly independent samples is challenging because patient-level data is either not suitably collected, or made available; of the existing studies similar to our proposed framework (reviewed in “Learning models of signatures” section), only two [64, 71] make use of independent samples and all rely on cross-validation for model validation and selection to mitigate against over-optimistic results due to over-fitting e.g. the bias-variance trade-off [57] and inductive bias [117].…”
Section: Discussion (mentioning)
confidence: 99%
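The excerpt above notes that, with independent samples rarely available, prior studies have relied on cross-validation for model validation and selection to guard against over-optimistic, over-fit results. As a minimal sketch of that strategy, the nested cross-validation example below uses placeholder data, a generic logistic-regression model, and an illustrative hyper-parameter grid; none of these details come from the cited studies.

```python
# Hypothetical sketch: nested cross-validation for model selection and
# validation, the strategy described in the excerpt for limiting
# over-fitting when no independent validation sample exists.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

# Placeholder data standing in for baseline self-report predictors (X)
# and a dichotomised illness-course outcome (y).
X, y = make_classification(n_samples=500, n_features=30, random_state=0)

# Inner loop: hyper-parameter selection (here, regularisation strength C).
inner = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)

# Outer loop: performance estimated on folds never used for selection,
# which is what mitigates the over-optimism the excerpt refers to.
outer_auc = cross_val_score(inner, X, y, cv=5, scoring="roc_auc")
print(f"Nested CV AUC: {outer_auc.mean():.3f} +/- {outer_auc.std():.3f}")
```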
“…When neuroimaging data were used as the signature, machine learning classifiers were used to predict categorical diagnosis [66], transition from an at-risk state to a dichotomised ‘psychosis versus health’ outcome [67] and dichotomised clozapine response [68, 69], or univariate aggregate predicted univariate global assessment of function (GAF) [70]. Only one study [71] used machine learning to predict multiple outcomes in major depressive disorder although again, these were dichotomised and did not model trajectories as multidimensional constructs. We argue that stratified psychiatry should avoid categorical diagnoses and univariate treatment outcomes.…”
Section: Step One: Multidimensional Definition Of Disorder (mentioning)
confidence: 99%
“…Last, but not least, it should also be noted that these results are preliminary and require external validation to confirm whether the observed additive genetic effects and genetic correlations are specific to STAR*D or might also exist in other independent samples. Recent machine learning studies of depression (Chekroud et al, 2016; Kessler et al, 2016) underscore the need for validation because findings in a single study may be biased. Specifically, a mathematical model may fit data well in one population, but perform poorly in another.…”
Section: Discussion (mentioning)
confidence: 99%
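The excerpt above argues that a model may fit one population well yet perform poorly in another, hence the call for external validation. The sketch below illustrates that step under assumed placeholder cohorts and a generic scikit-learn classifier: the model is fit on a discovery sample and then scored, without refitting, on an independent sample.

```python
# Hypothetical sketch of external validation: evaluate a fitted model on an
# independent sample. Cohort names and variables are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Placeholder "discovery" and "external" cohorts; in practice these would
# be separately collected samples rather than simulated data.
X_disc, y_disc = make_classification(n_samples=600, n_features=25, random_state=1)
X_ext, y_ext = make_classification(n_samples=300, n_features=25, random_state=2)

model = LogisticRegression(max_iter=1000).fit(X_disc, y_disc)

# Apparent (in-sample) performance is typically optimistic; the external
# estimate indicates whether the model transports to a new population.
auc_internal = roc_auc_score(y_disc, model.predict_proba(X_disc)[:, 1])
auc_external = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"Discovery AUC: {auc_internal:.3f}  External AUC: {auc_external:.3f}")
```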