2018
DOI: 10.48550/arxiv.1804.07998
Preprint

Generating Natural Language Adversarial Examples

Cited by 107 publications (182 citation statements)
References 10 publications
“…Numerous research studies have extensively studied the role of adversarial attacks in developing robust NLP models [35], [39], [54], [58]. For example, Cheng et al. [54] study crafting AEs for seq2seq models whose inputs are discrete text strings.…”
Section: Breaching Security By Improving Attacks (mentioning, confidence: 99%)
“…On the other hand, a black-box attack is a type of adversarial attack where an adversary does not have access to the model's internal structure or parameters. This attack technique has been used in numerous research works [12], [39], [53], [83]-[89].…”
Section: Breaching Security By Improving Attacks (mentioning, confidence: 99%)
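To make the black-box setting above concrete, the following is a minimal Python sketch of a query-only, word-substitution attack loop: the classifier is treated as an opaque scoring function and only its output probabilities are observed, never its parameters or gradients. The model_predict callable, the SYNONYMS table, and the query budget are hypothetical placeholders for illustration, not the method of any cited paper.

from typing import Callable, Dict, List

# Hypothetical synonym table; a real attack would draw on a large lexical resource.
SYNONYMS: Dict[str, List[str]] = {
    "good": ["great", "decent", "fine"],
    "terrible": ["awful", "poor", "dreadful"],
}

def black_box_word_substitution(
    tokens: List[str],
    target_label: int,
    model_predict: Callable[[List[str]], List[float]],  # black-box: returns class probabilities only
    max_queries: int = 100,
) -> List[str]:
    """Greedily swap words for synonyms, keeping a swap only if it raises the
    probability of the target label; no gradients or model internals are used."""
    best = list(tokens)
    best_score = model_predict(best)[target_label]
    queries = 1
    for i, word in enumerate(tokens):
        for candidate in SYNONYMS.get(word.lower(), []):
            if queries >= max_queries:
                return best
            trial = best[:i] + [candidate] + best[i + 1:]
            score = model_predict(trial)[target_label]
            queries += 1
            if score > best_score:
                best, best_score = trial, score
    return best

# Toy usage with a stand-in scoring function playing the role of the target model:
toy_model = lambda toks: [0.9, 0.1] if "good" in toks else [0.4, 0.6]
print(black_box_word_substitution(["a", "good", "movie"], target_label=1, model_predict=toy_model))

The greedy, score-guided search is only one possible query strategy; the point of the sketch is that every decision is driven by model outputs alone, which is what distinguishes the black-box threat model described above.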
“…Prior work has shown that neural networks are vulnerable to various types of (adversarial) perturbations, such as small ℓ-norm bounded perturbations [39], geometric transformations [13,22], and word substitutions [2]. Such perturbations can often cause a misclassification for any given input, which may have serious consequences, especially in safety-critical systems.…”
Section: Introduction (mentioning, confidence: 99%)
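To give some intuition for the norm-bounded perturbations contrasted here with word substitutions, the sketch below (a Python/NumPy illustration under assumed settings: an image-like continuous input in [0, 1] and a hypothetical budget eps) shows how an adversarial input is kept inside an l-infinity ball around the original. It illustrates only the constraint, not a complete attack.

import numpy as np

def project_linf(x_adv: np.ndarray, x_orig: np.ndarray, eps: float) -> np.ndarray:
    """Clip a perturbed input back into the l-infinity ball of radius eps
    around the original, then into the valid [0, 1] input range."""
    x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)
    return np.clip(x_adv, 0.0, 1.0)

# Toy check: a random perturbation projected down to a budget of eps = 0.03.
x = np.random.rand(3, 32, 32)                      # original input (e.g. an image)
noise = np.random.uniform(-0.1, 0.1, size=x.shape)
x_adv = project_linf(x + noise, x, eps=0.03)
assert np.abs(x_adv - x).max() <= 0.03 + 1e-9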