2022 · Preprint
DOI: 10.48550/arxiv.2203.05386
Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation

Abstract: While there has been a lot of research and many recent advances in neural fake news detection, defending against human-written disinformation remains underexplored. Upon analyzing current approaches for fake news generation and human-crafted articles, we found that there is a gap between them, which can explain the poor performance on detecting human-written fake news for detectors trained on automatically generated data. To address this issue, we propose a novel framework for generating articles closer to hum… [abstract truncated]

Cited by 4 publications (4 citation statements; 2 published in 2022, 2 in 2024) · References 21 publications
“…Researchers are thus looking for more advanced generative methods that can likewise subtly apply various propaganda techniques. Huang et al. [6] developed an approach that introduces loaded-language and appeal-to-authority techniques into legitimate articles. Their proposed methodology assigns an 'importance score' to every sentence in the original piece based on its relevance to a generated summary.…”
Section: Synthetic Propaganda Generation
confidence: 99%
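The excerpt above describes scoring each sentence of an article by its relevance to a generated summary. A minimal sketch of that idea, assuming a simple bag-of-words cosine similarity as the relevance measure (the actual scoring model used by Huang et al. is not specified here, so the scoring function below is an illustrative stand-in):

```python
import math
import re
from collections import Counter

def bow(text):
    # Lowercased bag-of-words term counts for a piece of text.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two term-count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def importance_scores(sentences, summary):
    # Assumed stand-in for the paper's 'importance score': each sentence
    # is scored by its similarity to the generated summary.
    s_vec = bow(summary)
    return [(sent, cosine(bow(sent), s_vec)) for sent in sentences]

article = [
    "The mayor announced a new transit plan on Monday.",
    "Local bakeries reported record sales of sourdough.",
    "The plan expands bus service to three suburbs.",
]
summary = "The mayor's transit plan expands bus service."
for sent, score in importance_scores(article, summary):
    print(f"{score:.2f}  {sent}")
```

Sentences that overlap the summary (the transit-related ones) receive higher scores than off-topic ones, which is the signal such a method could use to decide where to inject propaganda techniques.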
“…Factual Consistency Enhancement While factuality has been widely explored in the fields of fact-checking and fake news detection (Thorne et al., 2018; Wadden et al., 2020; Huang et al., 2022b; Shu et al., 2018; Pan et al., 2021; Huang et al., 2022a), factual inconsistency remains a major challenge for abstractive summarization. One line of work attempts to improve the faithfulness of the generated summary with a separate correction model that corrects the errors made by the summarization model (Cao et al., 2020; Fabbri et al., 2022b) or to directly fix factual inconsistencies in the training data (Adams et al., 2022).…”
Section: Remaining Challenges
confidence: 99%
“…Although older neural network models such as CNNs and RNNs are still used, pretrained transformer models have proven to be more efficient and accurate due to their improvements [17]. Transformer models started with the invention of BERT [13][18][19][20][21][22] in 2018 [23], followed by its variations such as BART [24], RoBERTa [5][18][22][25][26], DeBERTa [18][26] and ELECTRA [21][22][27].…”
Section: ISSN 2085-4552
confidence: 99%