2023
DOI: 10.1177/10731911231167490
Predicting Lifetime Suicide Attempts in a Community Sample of Adolescents Using Machine Learning Algorithms

Abstract: Suicide is a major global health concern and a prominent cause of death in adolescents. Previous research on suicide prediction has mainly focused on clinical or adult samples. To prevent suicides at an early stage, however, it is important to screen for risk factors in a community sample of adolescents. We compared the accuracy of logistic regressions, elastic net regressions, and gradient boosting machines in predicting suicide attempts by 17-year-olds in the Millennium Cohort Study (N = 7,347), combining a…
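The abstract names three model families (logistic regression, elastic net regression, gradient boosting machines) compared on predictive accuracy. As a rough, hedged illustration of such a comparison, and not the authors' actual pipeline, the scikit-learn sketch below fits all three on synthetic, imbalanced data; the predictors, outcome rate, and hyperparameters are assumptions made only for this example.

```python
# Minimal sketch (assumed data and settings, not the study's pipeline):
# compare logistic regression, elastic net, and gradient boosting by AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a community cohort: many predictors, rare outcome.
X, y = make_classification(n_samples=7347, n_features=50,
                           weights=[0.93, 0.07], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    # Standard logistic regression (scikit-learn's default light L2 penalty).
    "logistic": make_pipeline(StandardScaler(),
                              LogisticRegression(max_iter=5000)),
    # Elastic net: mixed L1/L2-penalized logistic regression via saga.
    "elastic_net": make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000)),
    # Gradient boosting machine.
    "gbm": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

In a real application, model selection would rest on cross-validated hyperparameter tuning rather than the fixed values assumed here.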

Cited by 5 publications (3 citation statements)
References: 75 publications
“…In the current study, it was possible to correctly classify 67% (Sample 1) and 75% (Sample 2) of all patients into either therapy dropouts or completers using machine learning prediction models including only baseline indicators of large naturalistic inpatient samples. As is common for classification tasks with highly unequal group sizes (e.g., Belsher et al., 2019; Jankowsky et al., 2023), we found that members of the majority group, that is, therapy completers, could be predicted more accurately. Compared to Bennemann et al. (2022), who predicted therapy dropout in a German outpatient sample, AUCs were higher in our analyses (.74/.83 vs. .66), which might be attributed to several reasons such as the different dropout ratios, predictor variables, lengths of therapy, or settings (inpatient vs. outpatient).…”
Section: Discussion (mentioning)
Confidence: 60%
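The statement above turns on a common property of imbalanced classification: overall accuracy is dominated by the majority group, so the majority class (here, therapy completers) is recovered more reliably than the minority class. The short sketch below, on synthetic data rather than the cited samples, shows why per-class recall and AUC are usually reported alongside a single accuracy figure; the class ratio and model choice are assumptions for illustration only.

```python
# Illustrative sketch (assumed data, not the cited study): per-class recall
# and AUC reveal the majority-class advantage that accuracy alone hides.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=30,
                           weights=[0.8, 0.2], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                           stratify=y, random_state=1)

clf = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("recall, majority class:", recall_score(y_te, pred, pos_label=0))
print("recall, minority class:", recall_score(y_te, pred, pos_label=1))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```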
“…As a final robustness check, we reran all our analyses with a random training-test split of 90–10 and repeated the process 1,000 times. Except for minor details (90–10 instead of 80–20 and 1,000 repetitions instead of 100), we used the protocol that Jankowsky et al. used in a recent study [38]. Differences in the amount of variance explained between the model based on Jankowsky et al.'s protocol and our original model reported in the manuscript ranged from 0.00 to 0.01 for all research questions.…”
Section: Results (mentioning)
Confidence: 99%
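The robustness protocol described in this statement (random 90–10 train-test splits repeated 1,000 times, summarizing out-of-sample variance explained) can be sketched as follows. The data generator, the elastic net stand-in predictor, and the R² summary are placeholders assumed for the example, not the citing study's variables or model.

```python
# Hedged sketch of a repeated random-split robustness check: 1,000 random
# 90-10 splits, each scored by out-of-sample variance explained (R^2).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Placeholder regression data (not the study's gaze/personality predictors).
X, y = make_regression(n_samples=500, n_features=40, noise=10.0,
                       random_state=2)

r2s = []
for rep in range(1000):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.10,
                                               random_state=rep)
    model = ElasticNet(alpha=1.0).fit(X_tr, y_tr)
    r2s.append(r2_score(y_te, model.predict(X_te)))

print(f"mean R^2 over 1,000 splits: {np.mean(r2s):.3f} "
      f"(SD = {np.std(r2s):.3f})")
```

Averaging R² over many random splits, as in the quoted protocol, gives a distribution of out-of-sample performance rather than a single estimate tied to one arbitrary partition.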
“…Again, these weights indicate that gaze explained the most variance, but personality facets provided complementary information about participants' intelligence test scores. As a final robustness check, we reran all our analyses with a random training-test split of 90–10 and repeated the process 1,000 times. Except for minor details (90–10 instead of 80–20 and 1,000 repetitions instead of 100), we used the protocol that Jankowsky et al. (2023) used in a recent study. Differences in the amount of variance explained between the model based on Jankowsky et al.'s protocol and our original model reported in the manuscript ranged from 0.00 to 0.01 for all research questions.…”
Citation type: mentioning
Confidence: 99%