2021
DOI: 10.1109/access.2021.3085875

All Your Fake Detector are Belong to Us: Evaluating Adversarial Robustness of Fake-News Detectors Under Black-Box Settings

Abstract: With the hyperconnectivity and ubiquity of the Internet, the fake news problem now presents a greater threat than ever before. One promising solution for countering this threat is to leverage deep learning (DL)-based text classification methods for fake-news detection. However, since such methods have been shown to be vulnerable to adversarial attacks, the integrity and security of DL-based fake news classifiers are under question. Although many works study text classification under the adversarial threat, to …

Cited by 23 publications (16 citation statements)
References 33 publications

“…This suggests that the STResnet-2 model is relatively more vulnerable to adversarial perturbations, which appears surprising, as we have observed that the STResnet-2 model gives comparatively better performance on the unperturbed dataset, D_test, than the other architectures. These observations hint at the possibility of an accuracy-robustness tradeoff in crowd-flow prediction models, as has been commonly observed in other DL models [5], [61].…”
Section: B. D-WB-Blind Adversarial Attacks (supporting)
confidence: 65%
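The accuracy-robustness tradeoff flagged in this statement can be probed empirically by comparing a trained model's accuracy on clean inputs against its accuracy on inputs perturbed by a one-step gradient attack such as FGSM. The sketch below is a minimal illustration in PyTorch, assuming a synthetic dataset and a toy MLP stand-in; none of the names correspond to the actual models or datasets in the cited papers.

```python
# Sketch: probing the accuracy-robustness tradeoff with one-step FGSM.
# The model and data are illustrative stand-ins, not the cited crowd-flow models.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in classifier: a 2-layer MLP on synthetic 20-dim inputs.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.randn(256, 20)
y = (x.sum(dim=1) > 0).long()  # synthetic labels

# Fit briefly so the clean-accuracy baseline is meaningful.
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

def fgsm(model, x, y, eps):
    """One-step FGSM: move inputs along the sign of the input gradient."""
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def accuracy(model, inputs, labels):
    with torch.no_grad():
        return (model(inputs).argmax(dim=1) == labels).float().mean().item()

print(f"clean accuracy:      {accuracy(model, x, y):.2f}")
print(f"accuracy under FGSM: {accuracy(model, fgsm(model, x, y, eps=0.3), y):.2f}")
```

A large gap between the two numbers for the model that also posts the best clean accuracy is exactly the tradeoff pattern the quoted statement describes.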
“…In their recent work, Jiang et al. [3] present the TaxiBJ dataset for the year 2021 and use a simple MLP model to benchmark their results. We choose the MLP model motivated by its recency, simplicity, and adversarial transferability: recent works have shown that, compared to other architectures, adversarial inputs generated against MLP models are comparatively general and transfer more effectively to different architectures [5]. STResnet architecture.…”
Section: A. Crowd-Flow State Prediction (mentioning)
confidence: 99%
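The transferability claim in this statement suggests a simple experiment: craft adversarial inputs against an MLP surrogate, then measure how much they degrade a differently shaped target network that never saw them. Below is a minimal sketch under that assumption, again in PyTorch with synthetic data and hypothetical architectures rather than the actual models from [3] or [5].

```python
# Sketch: surrogate-to-target transferability check. Adversarial examples are
# crafted against an MLP surrogate, then fed to a different target network
# (black-box with respect to the target). All names are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(512, 20)
y = (x[:, 0] * x[:, 1] > 0).long()  # synthetic XOR-like labels

def train(model, steps=300):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

surrogate = train(nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2)))
target = train(nn.Sequential(nn.Linear(20, 32), nn.Tanh(),
                             nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 2)))

# Craft FGSM examples against the surrogate only.
x_adv = x.clone().requires_grad_(True)
loss_fn(surrogate(x_adv), y).backward()
x_adv = (x_adv + 0.3 * x_adv.grad.sign()).detach()

def acc(model, inputs):
    with torch.no_grad():
        return (model(inputs).argmax(dim=1) == y).float().mean().item()

print(f"target on clean inputs:       {acc(target, x):.2f}")
print(f"target on transferred inputs: {acc(target, x_adv):.2f}")
```

If the transferred inputs noticeably depress the target's accuracy despite being crafted against the surrogate alone, that reproduces in miniature the transferability property the quoted passage relies on.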
“…Researchers have developed a wide variety of deep learning-based fake news detection systems. For instance, the works by Ni et al. (2021), Ali et al. (2021), and Verma et al. (2021) validate the use of multiple-view attention networks (MVNN), adversarial networks, and word-embedding feature extraction for fake news identification, respectively. These models can achieve an accuracy of over 80% for fake news detection, which makes them deployable for moderately sized systems.…”
Section: Literature Review (mentioning)
confidence: 99%
“…In this scenario, Ali et al. (2021) express the idea of hyperconnectivity, which consists of the dense interconnection of users and machines, high transmission speeds, ease of communication, and worldwide access to real-time information. It is a vibrant path in technological evolution, full of opportunities as well as struggles, such as the facilitation of fake news.…”
Section: Fake News Detection (mentioning)
confidence: 99%