2019
DOI: 10.48550/arxiv.1902.07285
Preprint

Towards a Robust Deep Neural Network in Texts: A Survey

Abstract: Deep neural networks (DNNs) have achieved remarkable success in various tasks (e.g., image classification, speech recognition, and natural language processing). However, research has shown that DNN models are vulnerable to adversarial examples, which cause incorrect predictions when imperceptible perturbations are added to normal inputs. Adversarial examples in the image domain have been investigated thoroughly, but research in the text domain remains limited, and no comprehensive survey of the field exists. In this p…

Cited by 12 publications (22 citation statements)
References 115 publications
“…In this method, each element of the input text is considered for substitution; the best perturbation is selected from all candidate perturbations, and the process is rerun until no further perturbation is possible [15]. This attack method has been utilized in several research works [15], [43], [90] with promising results. For example, Barham et al. [43] introduced a sparse projected gradient descent (SPGD) method for crafting interpretable adversarial examples (AEs) for text applications.…”
Section: Breaching Security By Improving Attacks
confidence: 99%
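The greedy substitution loop described in the citation statement above can be sketched as follows. The synonym table and the toy scorer `predict_proba` are hypothetical stand-ins for a real candidate-neighbor set and a trained DNN classifier; this is an illustrative sketch, not the method of [15] or [43].

```python
# Minimal sketch of a greedy word-substitution attack: try every candidate
# substitution, keep the single best one, and repeat until no substitution
# lowers the model's score any further.

SYNONYMS = {
    # Hypothetical substitution candidates per word.
    "good": ["fine", "great"],
    "movie": ["film"],
}

def predict_proba(tokens):
    """Toy 'positive sentiment' scorer standing in for a trained DNN."""
    positive = {"good", "great"}
    return sum(t in positive for t in tokens) / max(len(tokens), 1)

def greedy_attack(tokens):
    """Apply the best available substitution, then rerun, until no
    substitution decreases the score (the stopping rule from the quote)."""
    tokens = list(tokens)
    score = predict_proba(tokens)
    improved = True
    while improved:
        improved = False
        best = None
        for i, tok in enumerate(tokens):          # consider each element
            for sub in SYNONYMS.get(tok, []):     # all possible perturbations
                cand = tokens[:i] + [sub] + tokens[i + 1:]
                s = predict_proba(cand)
                if s < score:                     # keep the best perturbation
                    score, best = s, cand
                    improved = True
        if best is not None:
            tokens = best
    return tokens, score

adv, s = greedy_attack(["a", "good", "movie"])
print(adv, s)  # ['a', 'fine', 'movie'] 0.0
```

The greedy criterion (take the single best substitution per round) keeps the search tractable compared with exhaustively enumerating all joint substitutions.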
“…The literature on adversarial text analysis is quite rich [49,50,51]. Wang et al. [49] provide an overview of the literature on adversarial attacks and corresponding defense strategies for DNN-based English and Chinese text analysis systems. Zhang et al. [50] provide a more detailed survey of adversarial attacks on deep learning-based models for NLP, with a particular focus on methods for generating adversarial textual examples.…”
Section: Related Surveys
confidence: 99%
“…To thwart adversarial attacks, various defense methods have been proposed to protect DNN models. In general, defense methods can be classified into two categories: detection and model enhancement [41]. In the former, defenders try to detect adversarial examples so that the model can be shielded from them.…”
Section: Defense and Robustness
confidence: 99%
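As a sketch of the "model enhancement" category mentioned above, the snippet below augments a training set with perturbed copies of clean examples, in the spirit of adversarial training. The one-dimensional `perturb` and the nearest-mean classifier are illustrative assumptions, not the specific method of [41].

```python
# Toy sketch of model enhancement via data augmentation: retrain on
# clean examples plus perturbed copies so the model also covers inputs
# near the decision boundary.

def perturb(x, eps=0.5):
    """Hypothetical stand-in for crafting an adversarial example."""
    return x + eps

def train(data):
    """A nearest-mean 'model': store the mean feature value per class."""
    means = {}
    for label in {l for _, l in data}:
        vals = [x for x, l in data if l == label]
        means[label] = sum(vals) / len(vals)
    return means

def classify(means, x):
    """Predict the class whose stored mean is closest to x."""
    return min(means, key=lambda label: abs(means[label] - x))

clean = [(0.0, "neg"), (1.0, "neg"), (4.0, "pos"), (5.0, "pos")]
# Model enhancement: augment with perturbed copies, then retrain.
augmented = clean + [(perturb(x), l) for x, l in clean]
model = train(augmented)
```

The detection route, by contrast, would leave the model untouched and instead filter suspicious inputs before they reach it.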