Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology 2022
DOI: 10.18653/v1/2022.clpsych-1.16
Overview of the CLPsych 2022 Shared Task: Capturing Moments of Change in Longitudinal User Posts

Abstract: We provide an overview of the CLPsych 2022 Shared Task, which focuses on the automatic identification of 'Moments of Change' in longitudinal posts by individuals on social media and its connection with information regarding mental health. This year's task introduced the notion of longitudinal modelling of the text generated by an individual online over time, along with appropriate temporally sensitive evaluation metrics. The Shared Task consisted of two subtasks: (a) the main task of capturing changes in an …

Cited by 14 publications (20 citation statements)
References 15 publications
“…Here, we observe that the Multitask-attn-score model gives more promising results compared to the other enlisted models on both tasks. This behaviour is reflected in the classification results on test data too (Table 3), where Multitask-attn-score has outperformed the remaining feature embeddings with the Bi-LSTM model as well as the baseline state-of-the-art results (Tsakalidis et al, 2022a). From the model outcomes in Tables 2 and 3, one can also see the impact of introducing attention layers in the Bi-LSTM model.…”
Section: Experiments and Results
Mentioning confidence: 84%
“…In this paper, we presented our system description for CLPsych shared task (Tsakalidis et al, 2022a). The task consists of two subtasks.…”
Section: Discussion
Mentioning confidence: 99%
“…In this paper, we explain our approach to the CLPsych shared task (Tsakalidis et al, 2022a), which consists of two subtasks, as follows: Subtask A: Subtask A tries to capture those moments when a user's mood deviates from their baseline mood, based on the user's postings throughout a specific time period — this is a post-level sequential classification task. The full task description can be found in (Tsakalidis et al, 2022b).…”
Section: Introduction
Mentioning confidence: 99%
“…The data used in this work are those selected for the CLPsych 2022 shared task (Tsakalidis et al, 2022a), which also involved assessing overall suicide risk; here, we focus solely on predictions of changes in mood over time. In order to access the data, each member of this team signed a data usage agreement and an NDA due to the sensitive nature of the data.…”
Section: Data
Mentioning confidence: 99%
“…Additionally, these tasks may seek to make early predictions about mental states, allowing for prompt intervention when needed (Losada et al, 2020). This work represents one such attempt as part of the 2022 CLPsych shared task (Tsakalidis et al, 2022a), 1 using a transformer-based architecture to make predictions about changes in Reddit users' moods over time. We demonstrate how state-of-the-art transformer models like RoBERTa (Liu et al, 2019) provide predictions of changes in mood that are difficult to improve upon with custom features or sequential architectures.…”
Section: Introduction
Mentioning confidence: 99%