Objective
Thus far, most applications in precision mental health have not been evaluated prospectively. This article presents the results of a prospective randomized controlled trial investigating the effects of a digital decision support and feedback system, which includes two components of patient-specific recommendations: (a) a clinical strategy recommendation and (b) adaptive recommendations for patients at risk for treatment failure.
Method
Therapist-patient dyads (N = 538) in a cognitive behavioral therapy outpatient clinic were randomized to either having access to a decision support system (intervention group; n = 335) or not (treatment as usual; n = 203). First, treatment strategy recommendations (problem-solving, motivation-oriented, or a mix of both strategies) for the first 10 sessions were evaluated. Second, the effect of psychometric feedback enhanced with clinical problem-solving tools on treatment outcome was investigated.
Results
The prospective evaluation showed a differential effect size of about 0.3 when therapists followed the recommended treatment strategy in the first 10 sessions. Moreover, the linear mixed models revealed therapist symptom awareness and therapist attitude and confidence as significant predictors of outcome, as well as therapist-rated usefulness of feedback as a significant moderator of the feedback-outcome and not-on-track-outcome associations. However, no main effects were found for feedback.
Conclusions
The results demonstrate the importance of prospective studies and the high-quality implementation of digital decision support tools in clinical practice. Therapists seem to be able to learn from such systems and incorporate them into their clinical practice to enhance patient outcomes.
Background
About 30% of patients drop out of cognitive–behavioural therapy (CBT), which has implications for psychiatric and psychological treatment. Findings concerning drop-out remain heterogeneous.
Aims
This paper aims to compare different machine-learning algorithms using nested cross-validation, evaluate their benefit in naturalistic settings, and identify the best model as well as the most important variables.
Method
The data-set consisted of 2543 out-patients treated with CBT. Assessment took place before session one. Twenty-one algorithms and ensembles were compared. Two metrics (Brier score and area under the curve (AUC)) were used for evaluation.
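The abstract does not describe the implementation; the following is a minimal sketch of the kind of nested cross-validation described, using scikit-learn on synthetic data. The model, hyperparameter grid, fold counts, and data are all assumptions for illustration, not the study's actual setup.

```python
# Sketch: nested cross-validation evaluating a classifier with
# Brier score and AUC. All settings here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_validate

# Synthetic stand-in for pre-treatment assessment data
# (class imbalance roughly mimics a ~30% drop-out rate)
X, y = make_classification(n_samples=500, n_features=20,
                           weights=[0.7, 0.3], random_state=0)

inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)  # tuning
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # evaluation

# Inner loop tunes hyperparameters; outer loop gives an unbiased estimate
model = GridSearchCV(RandomForestClassifier(random_state=0),
                     {"n_estimators": [100, 300]},
                     cv=inner, scoring="roc_auc")

scores = cross_validate(model, X, y, cv=outer,
                        scoring={"auc": "roc_auc",
                                 "brier": "neg_brier_score"})
print("AUC:", scores["test_auc"].mean())
print("Brier score:", -scores["test_brier"].mean())
```

The key point of the nested design is that hyperparameter tuning (inner loop) never sees the data used for performance estimation (outer loop), which avoids optimistically biased Brier/AUC estimates.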
Results
The best model was an ensemble that used Random Forest and nearest-neighbour modelling. During the training process, it was significantly better than generalised linear modelling (GLM) (Brier score: d = −2.93, 95% CI −3.95 to −1.90; AUC: d = 0.59, 95% CI 0.11 to 1.06). In the hold-out sample, the ensemble correctly identified 63.4% of cases, whereas the GLM identified only 46.2% correctly. The most important predictors were lower education, lower scores on the Personality Style and Disorder Inventory (PSSI) compulsive scale, younger age, and higher scores on the PSSI negativistic and PSSI antisocial scales as well as on the Brief Symptom Inventory (BSI) additional scale (mean of the four additional items) and the BSI overall scale.
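The abstract does not specify how the Random Forest and nearest-neighbour learners were combined; a stacked ensemble is one common way to do it, sketched below with scikit-learn on synthetic data. The `StackingClassifier` setup, the logistic-regression GLM baseline, and the hold-out split are all assumptions, not the study's method.

```python
# Sketch: a stacked ensemble of Random Forest + k-nearest-neighbour,
# compared against a GLM baseline on a hold-out sample.
# All modelling choices here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=800, n_features=20,
                           weights=[0.7, 0.3], random_state=1)
X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=1)

ensemble = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=1)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression())        # meta-learner
glm = LogisticRegression(max_iter=1000)          # GLM baseline

results = {}
for name, model in [("ensemble", ensemble), ("glm", glm)]:
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_ho)[:, 1]
    results[name] = (roc_auc_score(y_ho, prob),
                     brier_score_loss(y_ho, prob))
    print(name, "AUC=%.3f Brier=%.3f" % results[name])
```

The meta-learner is trained on out-of-fold predictions of the base models, so the ensemble can weight the Random Forest and nearest-neighbour components according to where each is reliable.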
Conclusions
Machine learning improves drop-out predictions. However, not all algorithms are suited to naturalistic data-sets and binary events. Tree-based and boosted algorithms that include a variable selection process seem well suited, whereas more advanced algorithms such as neural networks do not.