2021
DOI: 10.1609/icwsm.v15i1.18088

Towards Emotion- and Time-Aware Classification of Tweets to Assist Human Moderation for Suicide Prevention

Abstract: Social media platforms are already engaged in leveraging existing online socio-technical systems to employ just-in-time interventions for suicide prevention to the public. These efforts primarily rely on self-reports of potential self-harm content that is reviewed by moderators. Most recently, platforms have employed automated models to identify self-harm content, but acknowledge that these automated models still struggle to understand the nuance of human language (e.g., sarcasm). By explicitly focusing on Twi…

Cited by 2 publications (4 citation statements, year published: 2024) · References 46 publications
“…Some researchers presented ethical concerns relating to publicly traded companies inferring or collecting sensitive health information about their users and acting on it, or sharing it, without explicit consent [47,78]. Non-transparent data collection and inference processes currently used by social media platforms were highlighted as a growing area of concern [84]. Hallucinations (the tendency for LLMs to sometimes generate false responses) were identified as having impacts for the safety and reliability of LLM applications [47,50].…”
Section: Ethical Considerations
confidence: 99%
“…Others noted that integration of LLMs into clinical care may lead to a sense of distancing of the clinician from the individual, potentially fostering feelings of invalidation or insignificance [52] and exacerbating suicidal thoughts or self-harm behaviors. False negatives (when suicidality goes undetected) and false positives (when suicidality is incorrectly flagged as being present) were noted as concerns [83,84] as psychological harm can result (e.g., resulting in missed opportunities to intervene with someone at risk, or in unnecessary mental health evaluations for someone who is not at risk) [84]. Safety was also noted as a concern as conversational AI may advance at a pace that outstrips associated safety measures [48].…”
Section: Ethical Considerations
confidence: 99%
“…A sparse additive generative model, a topic analysis tool, was used to assess the temporal linguistic changes in tweets with and without evidence of self-harm. Furthermore, they explored temporal linguistic features of tweets with and without suicidal intent signs [23]. A transformer-based model was also proposed for suicidal ideation detection in social media that takes into consideration the temporal context [24].…”
Section: Introduction
confidence: 99%
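As context for the transformer-based, temporally aware approach mentioned in the excerpt above [24], here is a minimal, purely illustrative sketch. It assumes a generic Hugging Face sequence classifier and represents temporal context by prepending a coarse posting-time bucket to the tweet text; the model name, the time-bucketing scheme, and the classify helper are assumptions for demonstration, not details taken from the cited works.

```python
# Illustrative sketch only: a generic transformer text classifier in which a coarse
# time-of-day bucket is prepended to the tweet text as extra temporal context.
# This is an assumed setup for demonstration, not the method of the cited papers.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_NAME = "distilbert-base-uncased"  # assumed base model, not specified by the paper

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def classify(tweet_text: str, hour_of_day: int) -> int:
    """Return a predicted label (0/1) for a tweet plus a simple temporal cue."""
    # Encode the posting time as a coarse bucket and prepend it to the text.
    bucket = "night" if hour_of_day < 6 or hour_of_day >= 22 else "day"
    inputs = tokenizer(f"[time:{bucket}] {tweet_text}",
                       return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1).item())

# Example usage (the base model is untrained for this task, so the label is meaningless here):
print(classify("example tweet text", hour_of_day=3))
```

In practice the classifier head would need to be fine-tuned on labeled data before the predictions carry any meaning; the sketch only shows where a temporal signal could enter the input pipeline.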