2022
DOI: 10.1145/3469659

Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems

Abstract: The incremental diffusion of machine learning algorithms in supporting cybersecurity is creating novel defensive opportunities but also new types of risks. Multiple studies have shown that machine learning methods are vulnerable to adversarial attacks that create tiny perturbations aimed at decreasing the effectiveness of detecting threats. We observe that existing literature assumes threat models that are inappropriate for realistic cybersecurity scenarios because they consider opponents with complete knowledge…

Cited by 71 publications (44 citation statements)
References 83 publications (119 reference statements)
“…Although mixing data from different networks can result in more resilient ML-NIDS (cf. §II-B), relying on public datasets exposes the system to 'poisoning' attacks [51]. In these circumstances, training a ML-NIDS on such data would have the opposite effect of adversarial training.…”
Section: C7 (mentioning)
Confidence: 99%
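The statement above warns that training a ML-NIDS on publicly sourced data invites poisoning. A minimal sketch of the label-flipping variant of that threat, on synthetic stand-in data; the dataset, the poison_labels helper, and the flip fraction are illustrative assumptions, not the method of [51]:

```python
# Label-flipping poisoning sketch: an attacker who can contribute records to a
# public dataset mislabels some malicious flows as benign before a ML-NIDS is
# trained on the mix. Synthetic data; all names and values are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for a public NetFlow-style dataset: 4 features, label 1 = malicious.
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

def poison_labels(y, flip_fraction=0.3):
    """Relabel a fraction of malicious samples as benign (label flipping)."""
    y = y.copy()
    malicious = np.flatnonzero(y == 1)
    flipped = rng.choice(malicious, size=int(flip_fraction * malicious.size), replace=False)
    y[flipped] = 0
    return y

clean = RandomForestClassifier(random_state=0).fit(X, y)
poisoned = RandomForestClassifier(random_state=0).fit(X, poison_labels(y))

X_test = rng.normal(size=(1000, 4))
y_test = (X_test[:, 0] + X_test[:, 1] > 1.0).astype(int)
mal = y_test == 1
print("clean detection rate:   ", clean.score(X_test[mal], y_test[mal]))
print("poisoned detection rate:", poisoned.score(X_test[mal], y_test[mal]))
```

On this cleanly separable toy problem the ensemble may partially resist random flips; the point is the mechanism: a detector trained on tainted public data inherits the attacker's labels rather than gaining adversarial robustness.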
“…This is a pertinent aspect of the cybersecurity domain, where white-box is a highly unlikely setting. Considering that a NIDS is developed in a secure context, an attacker will commonly face a black-box setting, or occasionally gray-box [8], [9].…”
Section: Related Work (mentioning)
Confidence: 99%
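The black-box setting described above can be made concrete as a query-only evasion loop: the attacker observes nothing but the detector's verdict on traffic they send. A minimal sketch; nids_verdict is a hypothetical oracle standing in for the deployed detector, and random search is one simple strategy among many:

```python
# Black-box evasion sketch: the attacker holds no model internals, gradients,
# or training data -- only the ability to submit traffic and observe verdicts.
import numpy as np

rng = np.random.default_rng(1)

def nids_verdict(flow: np.ndarray) -> bool:
    """Hypothetical deployed detector: True means the flow is flagged."""
    return flow[0] + flow[1] > 1.0

def blackbox_evade(flow, step=0.1, max_queries=200):
    """Random-search evasion using only oracle verdicts (no gradients)."""
    for _ in range(max_queries):
        candidate = flow + rng.uniform(-step, step, size=flow.shape)
        if not nids_verdict(candidate):
            return candidate  # first variant the detector no longer flags
    return None  # evasion failed within the query budget

malicious_flow = np.array([1.2, 0.9, 0.0, 0.0])
print(blackbox_evade(malicious_flow))
```

A gray-box attacker who at least knows the feature set can direct the search and spend far fewer queries, which is why the distinction between the two settings matters in practice.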
“…In a real computer network, an example must fulfil the domain constraints of the utilized communication protocols and the class-specific constraints of each type of cyberattack. Apruzzese et al. [8] proposed a taxonomy to evaluate the feasibility of an adversarial attack against a NIDS, based on access to the training data, knowledge of the model and feature set, reverse engineering and manipulation capabilities. It can provide valuable guidance to establish the concrete constraints of each level for a specific system.…”
Section: Related Work (mentioning)
Confidence: 99%
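One way to read the taxonomy mentioned above is as a checklist of capability axes. A sketch of that reading, assuming nothing beyond the axes named in the statement; the dataclass, its field names, and the additive score are an illustrative encoding, not the paper's own formalism:

```python
# Encoding attacker-capability axes as booleans and ordering threat models by
# how many axes they grant. The axes follow the statement above; the additive
# scoring is an illustrative assumption.
from dataclasses import dataclass, fields

@dataclass
class AttackerCapabilities:
    training_data_access: bool   # can read or taint the training set
    model_knowledge: bool        # knows the architecture and parameters
    feature_set_knowledge: bool  # knows which features the NIDS extracts
    reverse_engineering: bool    # can query the NIDS as an oracle
    manipulation: bool           # can craft and inject arbitrary traffic

def feasibility_score(c: AttackerCapabilities) -> int:
    """Count granted axes; higher scores mean stronger, less realistic attackers."""
    return sum(getattr(c, f.name) for f in fields(c))

white_box = AttackerCapabilities(True, True, True, True, True)
gray_box  = AttackerCapabilities(False, False, True, True, True)
black_box = AttackerCapabilities(False, False, False, True, True)
print([feasibility_score(c) for c in (white_box, gray_box, black_box)])  # [5, 3, 2]
```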
“…Their focus is on the visual domain and they do not specifically discuss IDS or functionality-preserving adversarial attacks. Apruzzese et al. [27] examine adversarial examples and consider realistic attacks, highlighting that most literature considers adversaries who have complete knowledge about the classifier and are free to interact with the target systems. They further emphasize that few works consider 'realizable' perturbations that take into account domain and/or real-world constraints.…”
(mentioning)
Confidence: 99%
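A 'realizable' perturbation, in the sense quoted above, must leave the traffic valid at the protocol level. A minimal sketch of such a domain-constraint check, assuming NetFlow-style features; the specific constraints and the 40-byte bound are illustrative assumptions rather than a complete protocol model:

```python
# Domain-constraint check for perturbed flow features: a candidate adversarial
# flow is discarded unless it could exist on a real network. The constraints
# here are illustrative, not exhaustive.
MIN_PACKET_BYTES = 40  # rough lower bound for a TCP/IP packet

def is_realizable(flow: dict) -> bool:
    if flow["packets"] < 1 or flow["packets"] != int(flow["packets"]):
        return False  # packet counts must be positive integers
    if flow["bytes"] < flow["packets"] * MIN_PACKET_BYTES:
        return False  # every packet carries at least its headers
    if flow["duration"] < 0:
        return False  # durations cannot be negative
    return True

print(is_realizable({"packets": 10, "bytes": 800, "duration": 1.2}))  # True
print(is_realizable({"packets": 10, "bytes": 120, "duration": 1.2}))  # False: too few bytes
```

A purely geometric perturbation that fails such checks may fool the classifier in feature space yet be impossible to emit on a real network, which is exactly the gap between 'adversarial' and 'realizable' that the statement highlights.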