2022
DOI: 10.3233/jcs-210094
Adversarial examples for network intrusion detection systems

Abstract: Machine learning-based network intrusion detection systems have demonstrated state-of-the-art accuracy in flagging malicious traffic. However, machine learning has been shown to be vulnerable to adversarial examples, particularly in domains such as image recognition. In many threat models, the adversary exploits the unconstrained nature of images: the adversary is free to select an arbitrary number of pixels to perturb. However, it is not clear how these attacks translate to domains such as network intrusion …

Cited by 12 publications (2 citation statements)
References 30 publications

“…In future work, we intend to generate more realistic adversarial attacks that project more easily into the problem space. To do so, we will follow recommendations found in the literature [70,65,44], namely: i) restrict the set of features that may be perturbed, i.e., avoid perturbing non-differentiable features (so that the transformation remains reversible) and features directly tied to the functionality of the flow (so as not to break it); ii) apply small-amplitude perturbations and check that the values of the modified features remain valid (domain constraints); and iii) verify the consistency of the values taken by correlated features.…”
Section: Discussion (mentioning)
confidence: 99%
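
The three recommendations in this statement (a restricted perturbation mask, small validity-checked perturbation steps, and consistency of correlated features) can be illustrated with a minimal sketch. The code below is a hypothetical masked, FGSM-style perturbation in Python/NumPy, assuming toy flow features, bounds, and a "total equals sum of parts" consistency rule; it is not the cited papers' implementation.

    # Minimal sketch (assumptions, not the cited papers' code): a masked,
    # small-amplitude perturbation over flow features with domain-validity
    # clipping and a consistency repair for correlated features.
    import numpy as np

    def constrained_perturb(x, grad_sign, perturbable_mask, bounds, epsilon=0.05):
        """Apply an FGSM-style step only to perturbable features, then
        clip each feature back into its valid domain range.

        x               : 1-D array of flow features
        grad_sign       : sign of the loss gradient w.r.t. x (same shape)
        perturbable_mask: 1 for features we may modify, 0 otherwise
                          (excludes non-differentiable / functional fields)
        bounds          : (low, high) arrays of per-feature valid ranges
        """
        x_adv = x + epsilon * grad_sign * perturbable_mask   # (ii) small amplitude
        low, high = bounds
        return np.clip(x_adv, low, high)                     # (ii) domain validity

    def repair_correlations(x_adv, total_idx, part_idx):
        """(iii) Toy consistency rule: a hypothetical 'total bytes' feature
        must equal the sum of its per-direction parts."""
        x_adv[total_idx] = x_adv[part_idx].sum()
        return x_adv

    # Usage with toy values: 4 features, only features 0 and 1 perturbable.
    x = np.array([1200.0, 800.0, 2000.0, 6.0])   # fwd bytes, bwd bytes, total, flags
    mask = np.array([1.0, 1.0, 0.0, 0.0])
    bounds = (np.zeros(4), np.array([1e6, 1e6, 2e6, 255.0]))
    g = np.array([1.0, -1.0, 0.0, 0.0])          # pretend gradient signs
    x_adv = constrained_perturb(x, g, mask, bounds, epsilon=10.0)
    x_adv = repair_correlations(x_adv, total_idx=2, part_idx=np.array([0, 1]))
    print(x_adv)

Restricting the mask to differentiable, non-functional features is what keeps the perturbed flow usable in the problem space: the modification can be mapped back to real traffic without destroying the attack's behavior.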
“…All the factors mentioned above raise another question: "Are constrained domains less susceptible to adversarial example generation?". Sheatsley [50] empirically tested this hypothesis on constrained datasets. The author showed that the misclassification rate exceeds 95% with the adaptive JSMA and Histogram Sketch Generation (HSG) algorithms when applied to an intrusion detection dataset.…”
Section: Unrestricted and Restricted Domain Study (mentioning)
confidence: 99%
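
For context on the attack named in this statement: JSMA (the Jacobian-based Saliency Map Attack) greedily perturbs the features whose saliency, computed from the model's Jacobian, most pushes the prediction toward a target class. The sketch below shows the core saliency-and-select step in Python/NumPy as a single-feature simplification of the original pairwise search; the adaptive, constraint-aware variant reported in the cited work is not reproduced, and the toy Jacobian and mask are assumptions.

    # Minimal sketch of the JSMA saliency/select step (single-feature
    # simplification; the adaptive, domain-constrained variant in the
    # cited work is not reproduced here).
    import numpy as np

    def jsma_saliency(jacobian, target):
        """jacobian: (n_classes, n_features) d logits / d features at x.
        Returns per-feature saliency for increasing-feature perturbations
        toward class `target`."""
        jt = jacobian[target]                  # effect on the target logit
        others = jacobian.sum(axis=0) - jt     # summed effect on other logits
        # A feature is salient only if it raises the target logit (jt > 0)
        # while lowering the others (others < 0).
        valid = (jt > 0) & (others < 0)
        return np.where(valid, jt * np.abs(others), 0.0)

    def jsma_step(x, jacobian, target, perturbable_mask, theta=1.0):
        """Perturb the single most salient perturbable feature by +theta."""
        scores = jsma_saliency(jacobian, target) * perturbable_mask
        i = int(np.argmax(scores))
        x_adv = x.copy()
        if scores[i] > 0:
            x_adv[i] += theta
        return x_adv

    # Usage with a toy 3-class, 4-feature Jacobian; feature 3 is off-limits.
    J = np.array([[ 0.2, -0.1, 0.0,  0.3],
                  [-0.3,  0.4, 0.1, -0.2],
                  [ 0.0, -0.2, 0.2, -0.3]])
    x = np.zeros(4)
    print(jsma_step(x, J, target=0, perturbable_mask=np.array([1., 1., 1., 0.])))

Masking features before the argmax is also how a constraint-aware variant would keep each greedy step inside the valid region, which is the crux of applying such attacks to constrained domains like network intrusion detection.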