Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
DOI: 10.18653/v1/2020.emnlp-demos.16

TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP

Abstract: While there has been substantial research using adversarial attacks to analyze NLP models, each attack is implemented in its own code repository. It remains challenging to develop NLP attacks and utilize them to improve model performance. This paper introduces TextAttack, a Python framework for adversarial attacks, data augmentation, and adversarial training in NLP. TextAttack builds attacks from four components: a goal function, a set of constraints, a transformation, and a search method. TextAttack's modular…
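The four-component design maps directly onto the library's API. Below is a minimal sketch of composing an attack from those components, assuming a recent TextAttack release and a HuggingFace victim model; the model name and parameter values are illustrative, not prescribed by the paper:

```python
import transformers
from textattack import Attack
from textattack.constraints.pre_transformation import (
    RepeatModification,
    StopwordModification,
)
from textattack.goal_functions import UntargetedClassification
from textattack.models.wrappers import HuggingFaceModelWrapper
from textattack.search_methods import GreedyWordSwapWIR
from textattack.transformations import WordSwapEmbedding

# Illustrative victim model; any sequence classifier works once wrapped.
model_name = "textattack/bert-base-uncased-imdb"
model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

goal_function = UntargetedClassification(model_wrapper)       # when the attack succeeds
constraints = [RepeatModification(), StopwordModification()]  # which edits are allowed
transformation = WordSwapEmbedding(max_candidates=50)         # how words are perturbed
search_method = GreedyWordSwapWIR(wir_method="delete")        # how candidates are explored

attack = Attack(goal_function, constraints, transformation, search_method)
```

This particular combination roughly reproduces a TextFooler-style word-substitution attack; swapping any single component yields a new attack without touching the other three.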

Cited by 362 publications (291 citation statements)
References 23 publications (7 reference statements)
“…Although we find a range of works studying the robustness of DL-based text classifiers against adversarial attacks [7], there is very limited work in the literature that explores the adversarial ML threat for ML-based fake-news detection methodologies. To bridge this gap, we evaluate a recently proposed hybrid CNN-RNN based fake-news detector [1], generalizable to different datasets, under the adversarial setting.…”
Section: Fake Real (mentioning)
confidence: 99%
“…Unlike the approach adopted in current models [11], which used a manual method for generating adversarial examples, we automatically generate adversarial examples using four different approaches, i.e., TextBugger, TextFooler, PWWS, and DeepWordBug, from a state-of-the-art library, TextAttack [7]. We analyze the robustness of different detector architectures with varying configurations and hyperparameters (e.g.…”
Section: Fake Real (mentioning)
confidence: 99%
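The four attacks named in this statement correspond to ready-made recipe classes in TextAttack's attack_recipes module. A sketch of building all four against one victim model, reusing the `model_wrapper` constructed in the earlier sketch (class names assume a recent release):

```python
from textattack.attack_recipes import (
    DeepWordBugGao2018,
    PWWSRen2019,
    TextBuggerLi2018,
    TextFoolerJin2019,
)

# `model_wrapper` as built in the earlier sketch.
attacks = {
    "TextBugger": TextBuggerLi2018.build(model_wrapper),
    "TextFooler": TextFoolerJin2019.build(model_wrapper),
    "PWWS": PWWSRen2019.build(model_wrapper),
    "DeepWordBug": DeepWordBugGao2018.build(model_wrapper),
}
```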
“…To bridge this gap, we evaluate a recently proposed hybrid CNN-RNN based fake-news detector [1], generalizable to different datasets, under the adversarial setting. For this purpose, we utilize the state-of-the-art library TextAttack [7], which implements 16 different state-of-the-art attack strategies to benchmark the robustness of DNNs on several Natural Language Processing (NLP) tasks. Further, we analyze the adversarial threat surface of different detector architectures for several hyperparameters under the black-box threat model, a threat model in which knowledge of the detector and its parameters is not assumed, which makes this model more practical and adaptive [8].…”
Section: Fake Real (mentioning)
confidence: 99%
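One plausible pattern for benchmarking a model this way end to end, assuming TextAttack >= 0.3 (whose Attacker/AttackArgs API runs a recipe over a dataset); the dataset and example count here are illustrative:

```python
import textattack

# Reuses `model_wrapper` and a recipe attack from the sketches above.
dataset = textattack.datasets.HuggingFaceDataset("imdb", split="test")
attack = textattack.attack_recipes.TextFoolerJin2019.build(model_wrapper)

# Attack 100 held-out examples and print per-example and summary results.
attack_args = textattack.AttackArgs(num_examples=100, random_seed=42)
textattack.Attacker(attack, dataset, attack_args).attack_dataset()
```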
“…We specifically focus on the recently proposed four different attacks implemented in the TextAttack library. We choose these attacks based on their efficiency, relevance, and recency [7]. Let us assume that an input sequence X is composed of n words, represented as {x_1, x_2, ..., x_i, ..., x_n}.…”
Section: B. Adversarial Attacks (mentioning)
confidence: 99%
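For context, this notation typically feeds into the standard untargeted word-level attack objective. A generic sketch follows; the classifier F, similarity measure Sim, and threshold ε are assumed symbols for illustration, not taken from the citing paper:

```latex
% Generic word-level untargeted attack objective (assumed formulation):
% perturb a few words of X so the classifier's prediction flips while
% the perturbed sequence X' stays semantically close to X.
\[
  X' = \{x_1, \dots, x_i', \dots, x_n\}, \qquad
  F(X') \neq F(X)
  \quad \text{subject to} \quad
  \operatorname{Sim}(X, X') \geq \epsilon .
\]
```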
“…The four components from which TextAttack builds attacks are: a goal function, a set of constraints, a transformation, and a search method. The attacks can be reused for data augmentation and adversarial training [18].…”
(mentioning)
confidence: 99%
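As a small illustration of that reuse, TextAttack exposes Augmenter classes built from the same transformations and constraints. A minimal sketch, with illustrative parameter values:

```python
from textattack.augmentation import EmbeddingAugmenter

# Swaps a fraction of words for nearest neighbors in embedding space,
# producing label-preserving augmented copies of each input.
augmenter = EmbeddingAugmenter(
    pct_words_to_swap=0.1,
    transformations_per_example=2,
)
print(augmenter.augment("TextAttack reuses attack components for augmentation."))
```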