2021
DOI: 10.1007/978-3-030-73696-5_9

Transformer-Based Language Model Fine-Tuning Methods for COVID-19 Fake News Detection

Abstract: With the COVID-19 pandemic, related fake news has been spreading widely across social media, and believing it indiscriminately can cause great trouble in people's lives. However, universal language models may perform weakly on such fake news detection for lack of large-scale annotated data and sufficient semantic understanding of domain-specific knowledge, while a model trained only on the corresponding corpora is also mediocre due to insufficient learning. In this paper, we propose a novel transf…

Cited by 29 publications (14 citation statements). References 8 publications.

“…To quantify our datasets' contributions, we designed experiment setups wherein we trained RoBERTa base (Liu et al., 2019) for paraphrase detection on a combination of TwitterPPDB and our datasets as training data. RoBERTa was chosen for its generality, as it is a commonly used model in current NLP work and benchmarking, and currently achieves SOTA or near-SOTA results on a majority of NLP benchmark tasks (Wang et al., 2019, 2020; Chen et al., 2021).…”
Section: Methods (mentioning)
confidence: 99%
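The setup quoted above is sentence-pair classification. A minimal sketch of that kind of fine-tuning with Hugging Face Transformers follows, where GLUE/MRPC stands in for the TwitterPPDB plus custom data the citing authors used; the hyperparameters are illustrative, not their reported configuration.

# Sketch: fine-tune roberta-base for paraphrase detection (binary sentence-pair classification).
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# GLUE/MRPC: paraphrase pairs with binary labels, used here only as a stand-in corpus.
data = load_dataset("glue", "mrpc")

def encode(batch):
    # The tokenizer joins each sentence pair with the model's separator token.
    return tokenizer(batch["sentence1"], batch["sentence2"],
                     truncation=True, max_length=128, padding="max_length")

data = data.map(encode, batched=True)

args = TrainingArguments(output_dir="roberta-paraphrase",
                         per_device_train_batch_size=16,
                         num_train_epochs=3,
                         learning_rate=2e-5)

Trainer(model=model, args=args,
        train_dataset=data["train"],
        eval_dataset=data["validation"]).train()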
“…Within various studies and research, apart from tokenisation and stopword removal, authors have performed removal of HTTP URLs and special characters [3][4][5]. In the study [6], in addition to the traditional preprocessing techniques, the authors performed data augmentation using the back-translation technique to increase the existing data.…”
Section: A. Textual Data Preprocessing (mentioning)
confidence: 99%
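A rough sketch of the preprocessing and augmentation steps referenced above, under assumed choices: regex-based removal of HTTP URLs and special characters, NLTK tokenisation and stopword removal, and back-translation through English-German Marian checkpoints. The specific regexes, stopword list, and translation models are illustrative, not those of the cited studies.

# Sketch: text cleaning plus back-translation augmentation.
import re
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from transformers import pipeline

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)
STOP = set(stopwords.words("english"))

def clean(text: str) -> list[str]:
    text = re.sub(r"https?://\S+", " ", text)      # drop HTTP(S) URLs
    text = re.sub(r"[^A-Za-z0-9\s]", " ", text)    # drop special characters
    tokens = word_tokenize(text.lower())
    return [t for t in tokens if t not in STOP]    # drop stopwords

# Back-translation: English -> German -> English yields a paraphrased variant.
en_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
de_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

def back_translate(text: str) -> str:
    german = en_de(text, max_length=256)[0]["translation_text"]
    return de_en(german, max_length=256)[0]["translation_text"]

sample = "Check https://example.com: the vaccine claim was debunked!"
print(clean(sample))
print(back_translate("The vaccine claim was debunked by health authorities."))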
“…We used three transformer models: m-BERT, Bangla-BERT, and XLM-R on BEmoC. In recent years, transformers have been used extensively for classification tasks to achieve state-of-the-art results (Chen et al., 2021). The models are taken from the Huggingface transformers library and fine-tuned on the emotion corpus using the ktrain (Maiya, 2020) package.…”
Section: Transformer Models (mentioning)
confidence: 99%
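A hedged sketch of that workflow: pulling a multilingual checkpoint from the Hugging Face hub and fine-tuning it with ktrain. The toy data, label set, and hyperparameters below are placeholders rather than the BEmoC setup of the cited work, and bert-base-multilingual-cased stands in for m-BERT.

# Sketch: fine-tune a multilingual transformer for emotion classification with ktrain.
import ktrain
from ktrain import text

MODEL_NAME = "bert-base-multilingual-cased"   # could also be xlm-roberta-base
class_names = ["anger", "fear", "joy", "sadness", "surprise", "disgust"]  # hypothetical labels

# Tiny placeholder data so the sketch runs end to end; real work would load the corpus here.
x_train = ["I am so happy today", "This is terrifying"] * 8
y_train = ["joy", "fear"] * 8
x_val = ["What a pleasant surprise", "I feel very sad"]
y_val = ["surprise", "sadness"]

t = text.Transformer(MODEL_NAME, maxlen=128, class_names=class_names)
trn = t.preprocess_train(x_train, y_train)
val = t.preprocess_test(x_val, y_val)

model = t.get_classifier()
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=16)

# One-cycle policy with a small learning rate, as is typical for transformer fine-tuning.
learner.fit_onecycle(2e-5, 3)
learner.validate(class_names=class_names)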