Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-Political Events from Text (CASE 2021)
DOI: 10.18653/v1/2021.case-1.18
IBM MNLP IE at CASE 2021 Task 1: Multigranular and Multilingual Event Detection on Protest News

Abstract: In this paper, we present the event detection models and systems we have developed for Multilingual Protest News Detection - Shared Task 1 at CASE 2021. The shared task has 4 subtasks which cover event detection at different granularity levels (from document level to token level) and across multiple languages (English, Hindi, Portuguese and Spanish). To handle data from multiple languages, we use a multilingual transformer-based language model (XLM-R) as the input text encoder. We apply a variety of techniqu…
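The multigranular setup the abstract describes (one multilingual encoder feeding both document/sentence-level and token-level event detection) can be sketched as a shared encoder output with two classification heads. This is an illustrative sketch only, not the authors' implementation: the class name, head sizes, and the random tensor standing in for XLM-R hidden states are all assumptions made so the example runs self-contained.

```python
import torch
import torch.nn as nn

class MultiGranularEventDetector(nn.Module):
    """Illustrative sketch: one shared encoder representation feeds two
    heads, one for sentence-level and one for token-level event detection."""

    def __init__(self, hidden_size=16, num_token_labels=3):
        super().__init__()
        # In the paper's setting the encoder would be XLM-R; here we take
        # its hidden states as a given input so the sketch stays offline.
        self.sentence_head = nn.Linear(hidden_size, 2)               # event / no event
        self.token_head = nn.Linear(hidden_size, num_token_labels)   # e.g. BIO trigger tags

    def forward(self, encoder_states):
        # encoder_states: (batch, seq_len, hidden_size), e.g. XLM-R last layer
        cls_vec = encoder_states[:, 0, :]                 # pooled first-token vector
        sent_logits = self.sentence_head(cls_vec)         # (batch, 2)
        tok_logits = self.token_head(encoder_states)      # (batch, seq_len, num_token_labels)
        return sent_logits, tok_logits

detector = MultiGranularEventDetector()
states = torch.randn(2, 7, 16)   # stand-in for XLM-R hidden states
sent_logits, tok_logits = detector(states)
print(sent_logits.shape, tok_logits.shape)
```

Sharing the encoder across granularities is what lets a single multilingual model serve all four subtasks; only the lightweight heads differ per level.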

Cited by 3 publications (2 citation statements)
References 24 publications
“…Following them, various improved architectures such as LSTM-Attention, Convolutional Recurrent Neural Network (CRNN) and CNN-Attention were proposed for sentence-level event detection (Liu et al., 2019a; Huynh et al., 2016). Recently, with the success of transformers such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), and XLM-RoBERTa (XLM-R) (Conneau et al., 2020), state-of-the-art sentence-level event detection models are based on transformers (Hu and Stoehr, 2021; Awasthy et al., 2021; Hettiarachchi et al., 2023a), which we also use in this research.…”
Section: Event Detection
confidence: 99%
“…Later, neural network architectures such as Bidirectional LSTM (Bi-LSTM), Dynamic Multi-pooling CNNs (DMCNNs), Bi-LSTM-DMCNN and multi-attention were proposed for word-level event detection (Nguyen et al., 2016; Feng et al., 2016; Chen et al., 2015; Balali et al., 2020; Ding and Li, 2018). Very recently, similar to the sentence level, different pre-trained transformers such as BERT and XLM-R were used at the word level (Huang and Ji, 2020; Awasthy et al., 2021; Hettiarachchi et al., 2023a), setting the state-of-the-art performance (Hürriyetoğlu et al., 2021a, 2022). In summary, previous research built separate models for sentence and word-level event detection.…”
Section: Event Detection
confidence: 99%