Findings of the Association for Computational Linguistics: EACL 2023
DOI: 10.18653/v1/2023.findings-eacl.184

Combining Psychological Theory with Language Models for Suicide Risk Detection

Daniel Izmaylov, Avi Segal, Kobi Gal, et al.

Abstract: Recent years have seen a dramatic increase in the popularity of online counseling services providing emergency mental health support. This paper provides a new language model for automatic detection of suicide risk in online chat sessions between help-seekers and counselors. The model adapts a hierarchical BERT language model for this task. It extends the state of the art in capturing aspects of the conversation structure in the counseling session and in integrating psychological theory into the model. We test the p…
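
The hierarchical BERT design described in the abstract can be pictured as a two-level encoder: a BERT model embeds each utterance, and a small Transformer over those embeddings captures the conversation structure before classification. The sketch below is a minimal illustration in PyTorch/Hugging Face; the class name, layer sizes, and mean pooling are assumptions for illustration, not the authors' published implementation.

```python
# Minimal sketch of a hierarchical BERT classifier (hypothetical, not the
# authors' code): BERT produces one vector per utterance, and a small
# Transformer over those vectors models the conversation structure.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class HierarchicalBertClassifier(nn.Module):
    def __init__(self, base_model="bert-base-uncased", n_labels=2):
        super().__init__()
        self.utterance_encoder = AutoModel.from_pretrained(base_model)
        hidden = self.utterance_encoder.config.hidden_size
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8,
                                           batch_first=True)
        self.conversation_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(hidden, n_labels)

    def forward(self, input_ids, attention_mask):
        # input_ids: (n_utterances, seq_len) for one conversation
        out = self.utterance_encoder(input_ids=input_ids,
                                     attention_mask=attention_mask)
        cls_vectors = out.last_hidden_state[:, 0, :]   # one vector per utterance
        conv = self.conversation_encoder(cls_vectors.unsqueeze(0))
        return self.classifier(conv.mean(dim=1))       # conversation-level logits

# Usage on a toy two-utterance conversation:
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["hi, I need help", "what is going on?"],
            padding=True, return_tensors="pt")
model = HierarchicalBertClassifier()
logits = model(batch["input_ids"], batch["attention_mask"])
```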


Cited by 1 publication (2 citation statements)
References: 33 publications
“…We randomly split the labeled Sahar dataset into three sets: train (70%), validate (15%), and test (15%). These datasets were used throughout the experiments described in detail in Bialer et al. (2022) and Izmaylov et al. (2023). To evaluate model performance, we used ROC-AUC, which is widely employed in suicide detection research (Bernert et al., 2020).…”
Section: Empirical Methodology
Mentioning confidence: 99%
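
As an illustration of the evaluation setup quoted above, here is a minimal scikit-learn sketch of a stratified 70/15/15 train/validate/test split scored with ROC-AUC. The synthetic features, labels, and logistic-regression classifier are placeholders; the Sahar data and the actual model are not public.

```python
# Illustrative 70/15/15 split + ROC-AUC evaluation (placeholder data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))      # stand-in features for conversations
y = rng.integers(0, 2, size=200)    # stand-in binary risk labels

# 70/15/15: carve off 30%, then halve it into validate and test sets.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=42, stratify=y_rest)

clf = LogisticRegression().fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]   # P(high risk) per conversation
print("ROC-AUC:", roc_auc_score(y_test, scores))
```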
“…We incorporated the SRF lexicon as a pretraining task. As described in a previous study (Izmaylov et al., 2023), we first chose a 5-dimensional representation that outperformed the 20-dimensional representation on the validation set, leading us to use this representation in the subsequent pretraining phase. In the second step, the self-supervised knowledge task (SSK task) was applied as a new pretraining task for predicting Sahar conversations in the SRF representation space.…”
Section: The Language Model
Mentioning confidence: 99%
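
One plausible reading of the SSK pretraining task quoted above is a regression objective: the encoder learns to map each conversation to its 5-dimensional SRF-lexicon vector. The sketch below assumes that framing in PyTorch/Hugging Face; the target vectors, model choice, and hyperparameters are hypothetical, not the published pipeline.

```python
# Hypothetical sketch of the self-supervised knowledge (SSK) pretraining
# task: regress conversation embeddings onto 5-dimensional SRF-lexicon
# vectors. The SRF targets here are random placeholders.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
srf_head = nn.Linear(encoder.config.hidden_size, 5)   # 5-dim SRF space
optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(srf_head.parameters()), lr=2e-5)
loss_fn = nn.MSELoss()

def ssk_step(texts, srf_targets):
    """One pretraining step: regress [CLS] embeddings onto SRF vectors."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    cls = encoder(**batch).last_hidden_state[:, 0, :]
    loss = loss_fn(srf_head(cls), srf_targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# In the real pipeline the targets would come from scoring each
# conversation with the SRF lexicon; random values stand in here.
loss = ssk_step(["I feel hopeless lately"], torch.randn(1, 5))
```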