Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.198
Suicidal Risk Detection for Military Personnel

Abstract: We analyze social media for detecting the suicidal risk of military personnel, which is especially crucial for countries with compulsory military service such as the Republic of Korea. From a widely-used Korean social Q&A site, we collect posts containing military-relevant content written by active-duty military personnel. We then annotate the posts with two groups of experts: military experts and mental health experts. Our dataset includes 2,791 posts with 13,955 corresponding expert annotations of suicidal r…

Cited by 3 publications (2 citation statements) | References 30 publications
“…Moreover, datasets from participants with verified diagnoses are typically very limited in size (as expected), such as the dataset released by Yaneva et al (2017) with 27 texts, or Rauschenberger et al (2016) with 47 texts. Albeit smaller, the resources using data from verified participants are comparatively more diverse in terms of the languages (English, Spanish, German) and conditions (autism, dyslexia, general mental health) they cover; by contrast, the social media resources that are publicly available typically target generic mental health related to anxiety, depression or suicide ideation and focus almost predominantly on English-language sources, notable exceptions being Park et al (2020) for Korean and Lee et al (2020) for Cantonese.…”
Section: Datasets and Resources
confidence: 99%
“…In realizing FedTherapist, the challenge remains to effectively capture mental health-related signals from a large corpus of spoken and typed user language on smartphones, which differs from prior NLP mental health studies based on social media (Yates et al, 2017; Park et al, 2020) -see Appendix G. To address such a challenge, we propose Context-Aware Language Learning (CALL) methodology, which integrates various temporal contexts of users (e.g., time, location) captured on smartphones to enhance the model's ability to sense mental health signals from the text data. Our evaluation of 46 participants shows that FedTherapist with CALL achieves more accurate mental health prediction than the model trained with non-text data (Wang et al, 2018), achieving 0.15 AUROC improvement and 8.21% reduction in MAE.…”
Section: Introduction
confidence: 99%