Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-Political Events from Text (CASE 2021)
DOI: 10.18653/v1/2021.case-1.13

IIITT at CASE 2021 Task 1: Leveraging Pretrained Language Models for Multilingual Protest Detection

Abstract: In a world abounding in constant protests resulting from events like a global pandemic, climate change, and religious or political conflicts, there has always been a need to detect events/protests before they are amplified by news media or social media. This paper demonstrates our work on the sentence classification subtask of multilingual protest detection in CASE@ACL-IJCNLP 2021. We approached this task by employing various multilingual pre-trained transformer models to classify if any sentence contains informati…
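The abstract describes fine-tuning multilingual pretrained transformers for sentence-level protest classification. The sketch below illustrates that kind of setup; it is not the authors' code, and the choice of xlm-roberta-base as well as all hyperparameter values are assumptions made for illustration.

```python
# Minimal sketch of a multilingual sentence-classification setup of the kind
# the abstract describes. Model choice (xlm-roberta-base) and hyperparameters
# are illustrative assumptions, not taken from the paper.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TrainingArguments,
)

MODEL_NAME = "xlm-roberta-base"  # assumed; the paper compares several models

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=2,  # 1 = sentence contains protest/event information, 0 = not
)

# Example: tokenize one candidate sentence and get its classification logits
inputs = tokenizer(
    "Thousands marched through the capital demanding climate action.",
    truncation=True,
    return_tensors="pt",
)
logits = model(**inputs).logits  # shape (1, 2); argmax gives the predicted label

# Illustrative fine-tuning configuration for the sentence classification subtask
training_args = TrainingArguments(
    output_dir="protest-sentence-clf",
    num_train_epochs=3,
    learning_rate=2e-5,
    per_device_train_batch_size=16,
)
```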

Cited by 3 publications (1 citation statement)
References 25 publications
“…So, the model was fine-tuned for three epochs while maintaining a low learning rate of around 3e-5 to get a BLEU score of 24.2 (Popović, 2015; Snover et al., 2006). Training a model to predict for a low-resourced language was highly challenging due to the absence of prominent pretrained models (Kalyan et al., 2021; Yasaswini et al., 2021; Hande et al., 2021b). However, as an experiment, two models from HuggingFace Transformers, M2M100 (Fan et al., 2020) and Opus-MT from Helsinki-NLP (Tiedemann, 2020), were compared.…”
Section: Results and Analysis
confidence: 99%
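The quoted statement mentions fine-tuning for three epochs at a learning rate of about 3e-5 and comparing M2M100 and Opus-MT via HuggingFace Transformers. Below is a hedged sketch of how those two models could be loaded and such a run configured; the specific checkpoints (facebook/m2m100_418M, Helsinki-NLP/opus-mt-en-mul) and the remaining hyperparameters are assumptions, not details from the cited work.

```python
# Sketch (assumptions marked) of loading the two translation models compared in
# the quoted statement and configuring a fine-tuning run with the stated
# settings (3 epochs, learning rate ~3e-5). Checkpoint names and batch size are
# illustrative choices, not taken from the cited work.
from transformers import (
    M2M100ForConditionalGeneration,
    M2M100Tokenizer,
    MarianMTModel,
    MarianTokenizer,
    Seq2SeqTrainingArguments,
)

# M2M100 (Fan et al., 2020): many-to-many multilingual translation model
m2m_tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
m2m_model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

# Opus-MT (Tiedemann, 2020): Helsinki-NLP Marian checkpoints; this English-to-
# many checkpoint is an assumed stand-in for the low-resource pair in question
opus_name = "Helsinki-NLP/opus-mt-en-mul"
opus_tokenizer = MarianTokenizer.from_pretrained(opus_name)
opus_model = MarianMTModel.from_pretrained(opus_name)

# Fine-tuning configuration matching the setup stated in the quote
training_args = Seq2SeqTrainingArguments(
    output_dir="finetuned-mt",
    num_train_epochs=3,              # stated in the quote
    learning_rate=3e-5,              # stated in the quote
    per_device_train_batch_size=8,   # assumption
    predict_with_generate=True,      # generate translations for BLEU evaluation
)
```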