2019
DOI: 10.48550/arXiv.1912.09855
Preprint

Explainability and Adversarial Robustness for RNNs

Alexander Hartl,
Maximilian Bachl,
Joachim Fabini
et al.

Abstract: Recurrent Neural Networks (RNNs) yield attractive properties for constructing Intrusion Detection Systems (IDSs) for network data. With the rise of ubiquitous Machine Learning (ML) systems, malicious actors have been catching up quickly to find new ways to exploit ML vulnerabilities for profit. Recently developed adversarial ML techniques focus on computer vision and their applicability to network traffic is not straightforward: Network packets expose fewer features than an image, are sequential and impose sev…

Cited by 2 publications (3 citation statements)
References 15 publications

“…According to them, gradient similarity shows the influence of training data on test samples, and behaves differently for genuine and adversarial input samples, enabling the detection of various adversarial attacks with high accuracy. Some other interesting works are relying on explainable ML techniques to guard against adversarial attacks [25], [218], [220], [221]. However, the explanations/information regarding the working mechanism of ML algorithms revealed by explainability methods could also be utilized to generate more effective adversarial attacks on the algorithms [57].…”
Section: Agriculture (mentioning)
confidence: 99%
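
The gradient-similarity idea quoted above can be made concrete with a small sketch: compare the parameter gradient induced by a test input with the gradients induced by known-genuine training inputs, and flag inputs whose gradients look atypical. This is only an illustration of the general approach, not the exact procedure of the cited works; `model`, `loss_fn`, and the `threshold` value are placeholders chosen for the example.

```python
# Minimal sketch (PyTorch) of gradient-similarity-based adversarial detection.
# All names and the threshold are illustrative assumptions, not the cited method.
import torch
import torch.nn.functional as F

def parameter_gradient(model, loss_fn, x, y):
    """Flattened gradient of the loss w.r.t. all trainable model parameters."""
    loss = loss_fn(model(x), y)
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def looks_adversarial(model, loss_fn, x_test, y_pred, train_batch, threshold=0.1):
    """Flag x_test if its gradient is unusually dissimilar to training gradients."""
    g_test = parameter_gradient(model, loss_fn, x_test, y_pred)
    sims = []
    for x_tr, y_tr in train_batch:
        g_tr = parameter_gradient(model, loss_fn, x_tr, y_tr)
        sims.append(F.cosine_similarity(g_test, g_tr, dim=0).item())
    # Genuine inputs tend to align with training gradients; adversarial ones less so.
    return sum(sims) / len(sims) < threshold
```
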
“…Papernot et al [16] have proposed the adversarial distance, which measures distances between different labels using gradient information to indicate the risk of misclassification between classes. Meanwhile, in the NIDS domain, Hartl et al [8] have developed the Adversarial Risk Score (ARS), which is a distance-based robustness score for classifiers against adversarial examples. In their work, they use a Recurrent Neural Network (RNN) for classification, and investigate the feature sensitivity of their classifier.…”
Section: Related Work (mentioning)
confidence: 99%
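
For context, the excerpt refers to a per-feature sensitivity analysis and to a distance-based robustness score. The sketch below approximates both with plain gradient computations; it is not the exact Adversarial Risk Score (ARS) definition from Hartl et al. [8], and the step size, search procedure, and function names are assumptions made purely for illustration.

```python
# Rough illustration of (1) feature sensitivity via gradient magnitudes and
# (2) a distance-based robustness proxy: the average perturbation size needed
# to flip the prediction. Not the paper's exact ARS formulation.
import torch
import torch.nn.functional as F

def feature_sensitivity(model, x, target_class):
    """Mean absolute gradient of the target-class score w.r.t. each input feature."""
    x = x.clone().requires_grad_(True)
    score = model(x)[..., target_class].sum()
    score.backward()
    return x.grad.abs().mean(dim=0)  # one sensitivity value per feature

def distance_based_robustness(model, xs, ys, step=0.05, max_steps=100):
    """Average L2 distance of the smallest gradient-direction perturbation
    that changes the model's prediction (a crude, illustrative proxy)."""
    distances = []
    for x, y in zip(xs, ys):
        x_adv = x.clone()
        for _ in range(max_steps):
            x_adv = x_adv.clone().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv.unsqueeze(0)), y.unsqueeze(0))
            loss.backward()
            x_adv = (x_adv + step * x_adv.grad.sign()).detach()
            if model(x_adv.unsqueeze(0)).argmax(dim=1) != y:
                break
        distances.append(torch.norm(x_adv - x).item())
    return sum(distances) / len(distances)  # larger => harder to fool
```
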
“…due to complementary security measures). Moreover, weights can be assigned empirically using a feature sensitivity analysis [8].…”
Section: Feature Analysis and Grouping (mentioning)
confidence: 99%
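
A minimal illustration of deriving feature weights from an empirical sensitivity analysis, as suggested in the excerpt: features the classifier reacts to most strongly receive proportionally larger weights. The normalization to a unit sum is an arbitrary choice for the example and is not taken from the cited works.

```python
# Hypothetical weighting scheme: normalize measured per-feature sensitivities
# into weights that sum to 1. The numbers below are made-up example values.
import numpy as np

def sensitivity_weights(sensitivities):
    s = np.asarray(sensitivities, dtype=float)
    return s / s.sum()

print(sensitivity_weights([2.0, 5.0, 1.0]))  # -> [0.25  0.625 0.125]
```
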