2019
DOI: 10.48550/arxiv.1901.09963
Preprint

Defense Methods Against Adversarial Examples for Recurrent Neural Networks

Ishai Rosenberg,
Asaf Shabtai,
Yuval Elovici
et al.

Abstract: Adversarial examples are known to mislead deep learning models to incorrectly classify them, even in domains where such models achieve state-of-the-art performance. Until recently, research on both attack and defense methods focused on image recognition, primarily using convolutional neural networks (CNNs). In recent years, adversarial example generation methods for recurrent neural networks (RNNs) have been published, demonstrating that RNN classifiers are also vulnerable to such attacks. In this paper, we pr…

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

0
22
0

Year Published

2021
2021
2024
2024

Publication Types

Select...
4
2

Relationship

0
6

Authors

Journals

citations
Cited by 12 publications
(22 citation statements)
references
References 24 publications
0
22
0
Order By: Relevance
“…The topic is relatively less studied in the text domain [29]. Other than detection using a spell-checker, to the best of our knowledge, the only approach is the one mentioned in [34], which focuses on re-training for improving robustness rather than detecting adversarial texts. In our work, we propose a detection method which is inspired by differential testing [49].…”
Section: Adversarial Text Detection (mentioning)
Confidence: 99%
“…To the best of our knowledge, there are no existing methods or tools which are available for detecting adversarial texts. Note that the tool mentioned in [34] is not available.…”
Section: RQ1: Is KL Divergence Useful in Detecting Adversarial Samples? (mentioning)
Confidence: 99%
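The citation statement above asks whether KL divergence is useful for detecting adversarial samples. As a minimal illustrative sketch only (not the cited tool, whose implementation is unavailable), the following computes KL divergence between a classifier's output distributions on an input and on a slightly transformed variant; the distributions and the detection threshold here are assumed values, and all names are hypothetical.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete probability distributions.

    eps guards against log(0) when a class probability is zero.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Intuition behind KL-based detection: a benign input and a slightly
# transformed variant tend to yield similar output distributions (low KL),
# while an adversarial example sitting near a decision boundary can shift
# sharply under small transformations (high KL).
p_original = [0.90, 0.05, 0.05]  # assumed softmax output on the input
p_variant  = [0.10, 0.80, 0.10]  # assumed output on a transformed variant

score = kl_divergence(p_original, p_variant)
flagged = score > 1.0  # detection threshold (assumed, would be tuned)
```

Under these assumed distributions the divergence is large and the sample is flagged; in practice the threshold would be calibrated on benign data.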