Proceedings of the Sixth Workshop on Noisy User-Generated Text (W-NUT 2020)
DOI: 10.18653/v1/2020.wnut-1.66
NIT_COVID-19 at WNUT-2020 Task 2: Deep Learning Model RoBERTa for Identify Informative COVID-19 English Tweets

Abstract: This paper presents the model submitted by the NIT_COVID-19 team for identifying informative COVID-19 English tweets at WNUT-2020 Task 2. This shared task addresses the problem of automatically identifying whether an English tweet related to COVID-19 (the novel coronavirus) is informative or not. Informative tweets provide information about recovered, confirmed, suspected, and death cases, as well as the location or travel history of the cases. The proposed approach includes pre-processing techniques and pre-trained RoBERTa with s…
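The abstract mentions pre-processing before fine-tuning RoBERTa but does not list the exact steps. A minimal sketch of typical tweet normalization for this task is shown below; the specific replacements (masking URLs and user handles, collapsing whitespace) are assumptions based on common practice for the WNUT-2020 Twitter corpus, not the paper's confirmed pipeline.

```python
import re

def preprocess_tweet(text: str) -> str:
    """Normalize a raw tweet before tokenization.

    These steps are illustrative assumptions: the paper only says
    'pre-processing techniques' without specifying them.
    """
    text = re.sub(r"https?://\S+", "HTTPURL", text)  # mask links
    text = re.sub(r"@\w+", "@USER", text)            # mask user handles
    text = re.sub(r"\s+", " ", text).strip()         # collapse whitespace
    return text

print(preprocess_tweet("BREAKING: 12 new confirmed cases @WHO   https://t.co/abc"))
# → BREAKING: 12 new confirmed cases @USER HTTPURL
```

The cleaned string would then be passed to the RoBERTa tokenizer for binary (informative vs. uninformative) classification.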

Cited by 6 publications (2 citation statements)
References 20 publications
“…We used RoBERTa [29] and XLNet [30] pre-trained networks. These models use Transformers [39,40] to capture bidirectional relationships and are suitable for flood-event classification. These models were selected based on their performance on similar tasks [40,41].…”
Section: Language Model Combination for Textual Data
confidence: 99%
“…These models use Transformers [39,40] to capture bidirectional relationships and are suitable for flood-event classification. These models were selected based on their performance on similar tasks [40,41]. The implementation of both networks is performed with the help of the python-based Fastai [42] library.…”
Section: Language Model Combination for Textual Data
confidence: 99%