2024
DOI: 10.21203/rs.3.rs-4180591/v1
Preprint

Applying Language Models for Suicide Prevention: Evaluating News Article Adherence to WHO Reporting Guidelines

Zohar Elyoseph,
Inbar Levkovich,
Eyal Rabin
et al.

Abstract: Background Suicide is a significant societal issue that affects many individuals annually. Previous research has indicated that irresponsible media coverage of suicides can promote suicidal behaviors, such as glorifying the individual who committed suicide or providing excessive details about the method used. Consequently, the World Health Organization (WHO) has established guidelines for responsible journalistic reporting on suicide, outlining both recommended and discouraged practices. However, these guid…

Cited by 2 publications (4 citation statements)
References 26 publications
“…Studies have shown that large language models can accurately assess suicide risk, 24 adapt assessments to different cultural contexts, 35 and evaluate responsible reporting of suicide-related content. 22 Our study took this line of research further by exploring the potential of AI in professional training, addressing a critical need in mental health education, specifically in the area of suicide prevention.…”
Section: Discussion
“…Recent research has shown that LLMs can accurately identify emotions and mental disorders, such as schizophrenia, depression, and anxiety, and provide treatment recommendations and prognoses comparable to mental health professionals. 15–27 Despite their potential to democratize clinical knowledge and encourage ideological pluralism, 21,28,29 ethical concerns persist. These include data privacy, algorithmic opacity, threats to patient autonomy, risks of anthropomorphism, technology access disparities, corporate concentration, deep fakes, fake news, reduced reliance on professionals, and amplification of biases.…”
Section: AI-Based Technology in Mental Health