Proceedings of the 11th International Conference on Agents and Artificial Intelligence 2019
DOI: 10.5220/0007566307940800

Fake News Detection via NLP is Vulnerable to Adversarial Attacks

Abstract: News plays a significant role in shaping people's beliefs and opinions. Fake news has always been a problem, but it was not exposed to the mass public until the past election cycle for the 45th President of the United States. While quite a few detection methods have been proposed to combat fake news since 2015, they focus mainly on the linguistic aspects of an article without any fact checking. In this paper, we argue that these models have the potential to misclassify fact-tampering fake news as well as under-writt…

Cited by 66 publications (31 citation statements) | References 24 publications
“…Based on the types of machine learning models and web tools which have already been developed, it appears difficult to create a robust, entirely feature-based NLP model which includes no external information. Even seemingly performant natural language processing models have shown significantly reduced accuracy when presented with reconfigured news (changing small amounts of information to make the information false) as part of an adversarial attack [28] . Therefore, much of the prior work seems to be on seemingly peripheral tasks, such as stance detection, neural “fake news” detection, bot detection, and multi-step approaches involving the inclusion of external information.…”
Section: Introduction (mentioning; confidence: 99%)
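The vulnerability this statement describes is easy to see in feature space: changing one factual word leaves an article's surface representation almost untouched. The sketch below is our own illustration, not code from [28], and the sentences are invented; it compares TF-IDF vectors of an original and a fact-tampered sentence.

```python
# Minimal sketch: a fact-tampered article is nearly indistinguishable
# from the original in a surface (TF-IDF) feature space, so a purely
# feature-based classifier scores both versions almost identically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

original = ("The senator voted for the bill on Tuesday, "
            "and it passed by a margin of 12 votes.")
# Fact-tampered version: one word changed flips the claim's truth value.
tampered = ("The senator voted against the bill on Tuesday, "
            "and it passed by a margin of 12 votes.")

vectorizer = TfidfVectorizer().fit([original, tampered])
vecs = vectorizer.transform([original, tampered])

# Cosine similarity is close to 1.0: any linear model over these
# features assigns nearly the same score to the true and false claims.
print(cosine_similarity(vecs[0], vecs[1])[0, 0])
```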
“…Attacks on natural language learning. Zhou et al. argue that the use of Natural Language Processing to identify fake news is vulnerable to attacks on the machine learning itself [86]. Zhou et al. identify three attacks: the distortion of facts, the exchange between subject and object, and the confusion of causes.…”
(mentioning; confidence: 99%)
“…The exchange between subject and object aims to confuse the reader about who performs and who suffers the reported action. The confusion-of-causes attack consists of creating non-existent causal relations between two independent events, or of cutting parts of a story so that only the parts the attacker wishes to present to the reader remain [86].…”
(mentioning; confidence: 99%)
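To make the subject-object exchange concrete, here is a minimal sketch of the attack. It is our own illustration, not the authors' code; it assumes spaCy with the en_core_web_sm model installed (python -m spacy download en_core_web_sm) and handles only single-token subjects and objects.

```python
# Sketch of a subject-object exchange: reverse who does what to whom
# while reusing exactly the original vocabulary and style.
import spacy

nlp = spacy.load("en_core_web_sm")

def swap_subject_object(sentence: str) -> str:
    """Swap the nominal subject and direct object of a simple sentence."""
    doc = nlp(sentence)
    subj = next((t for t in doc if t.dep_ == "nsubj"), None)
    obj = next((t for t in doc if t.dep_ == "dobj"), None)
    if subj is None or obj is None:
        return sentence  # no simple subject/object pair found
    out = []
    for tok in doc:
        if tok.i == subj.i:
            out.append(obj.text)   # object takes the subject slot
        elif tok.i == obj.i:
            out.append(subj.text)  # subject takes the object slot
        else:
            out.append(tok.text)
        out.append(tok.whitespace_)
    return "".join(out)

# "The company sued the regulator." -> "The regulator sued the company."
print(swap_subject_object("The company sued the regulator."))
```

Because the swapped sentence reuses exactly the original words, a style- or lexicon-based detector sees essentially the same input even though the reported roles are reversed.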
“…Although a recent work [11] evaluates a fake-news detector under adversarial threat, our work differs in a number of ways. Unlike the approach adopted in current models [11], which used a manual method for generating adversarial examples, we automatically generate adversarial examples using four different approaches, i.e. TextBugger, TextFooler, PWWS, and DeepWordBug, from a state-of-the-art library, TextAttack [7].…”
Section: Fake Real (mentioning; confidence: 99%)
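All four attack recipes named in this statement ship with the TextAttack library. The sketch below shows how they might be run; it is not the cited paper's setup, and the checkpoint is a publicly released TextAttack sentiment model used as a stand-in for a fine-tuned fake-news classifier.

```python
# Hedged sketch: automatic adversarial-example generation with TextAttack.
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import (
    DeepWordBugGao2018, PWWSRen2019, TextBuggerLi2018, TextFoolerJin2019,
)
from textattack.datasets import Dataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Stand-in classifier; swap in a fake-news model to match the described setup.
checkpoint = "textattack/bert-base-uncased-imdb"
model = transformers.AutoModelForSequenceClassification.from_pretrained(checkpoint)
tokenizer = transformers.AutoTokenizer.from_pretrained(checkpoint)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Toy (text, label) pairs; replace with a labeled news dataset.
dataset = Dataset([("The senator voted for the bill on Tuesday.", 1)])

for recipe in (TextBuggerLi2018, TextFoolerJin2019, PWWSRen2019, DeepWordBugGao2018):
    attack = recipe.build(wrapper)
    attacker = Attacker(attack, dataset, AttackArgs(num_examples=len(dataset)))
    attacker.attack_dataset()  # logs each perturbed text and whether it flipped the label
```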