2022
DOI: 10.1109/access.2022.3146405

Adversarial Machine Learning in Text Processing: A Literature Survey

Abstract: Machine learning algorithms represent the intelligence that controls many information systems and applications around us. As such, they are targeted by attackers seeking to influence their decisions. Text created by machine learning algorithms has many applications, some of which can be considered malicious, especially if there is an intention to present machine-generated text as human-generated. In this paper, we surveyed major subjects in adversarial machine learning for text processing applications. Unlike adv…

Citations: cited by 8 publications (2 citation statements)
References: 89 publications
“…The goal of adversarial training is to improve the model's ability to distinguish between real and generated data, thereby making it more resistant to GAN-based attacks. Defense-GAN schemes [89] are a special type of GAN designed specifically to defend against GAN-based attacks. They can be trained to recognize and reject generated data, making them an effective countermeasure against GAN-based attacks.…”
Section: GAN-Based Attacks (mentioning)
Confidence: 99%
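To make the quoted mechanism concrete, the sketch below (PyTorch) trains a small discriminator to separate real samples from GAN-generated ones and uses it as a rejection gate at inference time, which is the role the passage attributes to Defense-GAN-style schemes. All class and function names are illustrative assumptions; this is not the implementation cited as [89].

```python
# Minimal sketch, assuming PyTorch is available. Names are illustrative;
# this is NOT the Defense-GAN implementation cited as [89]. It shows the
# mechanism the passage describes: a discriminator trained to tell real
# samples from GAN-generated ones, used to reject suspected generated inputs.
import torch
import torch.nn as nn

class GeneratedDataDetector(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),   # single logit: real (positive) vs. generated (negative)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train_step(detector, optimizer, real_batch, generated_batch):
    """One adversarial-training step: push real samples toward 1, generated toward 0."""
    optimizer.zero_grad()
    logits = torch.cat([detector(real_batch), detector(generated_batch)])
    labels = torch.cat([
        torch.ones(len(real_batch), 1),
        torch.zeros(len(generated_batch), 1),
    ])
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def reject_generated(detector, batch, threshold=0.5):
    """Inference-time gate: keep only the inputs the detector judges to be real."""
    with torch.no_grad():
        p_real = torch.sigmoid(detector(batch)).squeeze(-1)
    return batch[p_real >= threshold]
```

In use, the generated batches would come from the GAN being defended against, so the detector is continually retrained as the generator improves; the threshold trades off false rejections of real data against accepted generated data.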
“…The evaluation of explainability of DNN models is known to be a challenging task, necessitating such an effort. From another perspective, while there have been many surveys of the literature on adversarial attacks and robustness [7,8,11,25,29,35,46,51,57,61,65,69,75,77,101,104,112,113,116,118,119,121,122,129,135], which focus on attacking the predictive outcome of these models, there has been no effort so far to study and consolidate existing work on attacks on the explainability of DNN models. Many recent efforts have demonstrated the vulnerability of explanations (or attributions) to human-imperceptible input perturbations across image, text and tabular data [36,45,55,62,107,108,133].…”
Section: Introduction (mentioning)
Confidence: 99%
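The fragility of attributions referred to in this passage can be illustrated with a rough sketch: assuming a differentiable classifier, a small L_inf-bounded perturbation is crafted to push a plain-gradient saliency map away from the one computed on the clean input. The helper names below are hypothetical, and the code is only a schematic of the general idea, not the method of any specific work cited above.

```python
# Minimal sketch, assuming a differentiable PyTorch classifier. Helper names
# are hypothetical; this only schematizes the idea of perturbing an input,
# within a small L_inf ball, so that its gradient-based attribution changes.
import torch
import torch.nn.functional as F

def saliency(model, x, create_graph=False):
    """Plain-gradient attribution of the top-class score w.r.t. the input."""
    if not x.requires_grad:
        x = x.clone().requires_grad_(True)
    score = model(x).max(dim=-1).values.sum()
    (attr,) = torch.autograd.grad(score, x, create_graph=create_graph)
    return attr

def attribution_attack(model, x, eps=0.01, steps=10):
    """Craft an L_inf-bounded perturbation that moves the saliency map away
    from the one computed on the clean input."""
    x = x.detach()
    base = saliency(model, x).detach()            # attribution on the clean input
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        attr = saliency(model, x_adv, create_graph=True)
        # similarity between perturbed and clean attributions; we descend on it
        sim = F.cosine_similarity(attr.flatten(1), base.flatten(1)).mean()
        (grad,) = torch.autograd.grad(sim, x_adv)
        x_adv = (x_adv - (eps / steps) * grad.sign()).detach()
        x_adv = torch.clamp(x_adv, x - eps, x + eps)   # keep the change imperceptible
    return x_adv
```

A successful run of such an attack leaves the model's prediction essentially unchanged while the attribution map shifts noticeably, which is the vulnerability the quoted references [36,45,55,62,107,108,133] document.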