2022
DOI: 10.1016/j.neucom.2022.04.020
Adversarial attack and defense technologies in natural language processing: A survey

Cited by 40 publications (8 citation statements)
References 76 publications
“…Research on adversarial examples, whereby input data is intentionally perturbed to understand the system's resilience to human error or data tampering, has shown that even small modifications to the original text, such as swapping, inserting or deleting a single character, can determine whether a sentence is interpreted as a positive or negative statement. Similar unreliability can be obtained by replacing a word with a synonym or by inserting, replacing or deleting a whole sentence (Qiu et al, 2022).…”
Section: Unreliability
confidence: 85%
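The character-level edits the excerpt above describes (swapping, inserting, or deleting a single character) can be sketched as a small perturbation function. This is a minimal illustration only; the function name and random-choice details are my own and are not taken from the surveyed attack methods.

```python
import random

def perturb_chars(text, op="swap", seed=0):
    """Apply one character-level adversarial edit to `text`:
    swap two adjacent characters, insert a random letter,
    or delete a single character."""
    rng = random.Random(seed)
    chars = list(text)
    # Pick a position; guard against empty/one-char inputs.
    i = rng.randrange(max(len(chars) - 1, 1))
    if op == "swap" and len(chars) > 1:
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    elif op == "insert":
        chars.insert(i, rng.choice("abcdefghijklmnopqrstuvwxyz"))
    elif op == "delete" and chars:
        del chars[i]
    return "".join(chars)
```

Such perturbations preserve most of the surface form of the input, which is why they can flip a classifier's prediction while remaining easy for a human reader to understand.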
“…Another approach is to detect adversarial noise that might cause wrong predictions [92], [93]. While these methods focus on images, there are other approaches that focus on generating and detecting adversarial attacks for natural language processing [94] or cybersecurity [95], which can be used to test and improve the robustness of these models.…”
Section: Testing Correctness Publications
confidence: 99%
“…For example, Sun et al [144] […] However, to increase the reliability of these models in real-world applications, especially in critical domains like medicine, it is essential to systematically study the robustness of these models in various scenarios. Adversarial robustness refers to the model's ability to maintain good performance even in the case of deliberately crafted instances [464], [465]. These instances are called adversarial instances and are carefully designed by making subtle changes in the original inputs to deceive the model.…”
Section: Robustness Of GLLMs
confidence: 99%