2021
DOI: 10.1007/978-981-16-8059-5_20

Detect & Reject for Transferability of Black-Box Adversarial Attacks Against Network Intrusion Detection Systems

Abstract: In the last decade, the use of Machine Learning techniques in anomaly-based intrusion detection systems has seen much success. However, recent studies have shown that machine learning in general, and deep learning in particular, are vulnerable to adversarial attacks in which the attacker attempts to fool models by supplying deceptive input. Research in computer vision, where this vulnerability was first discovered, has shown that adversarial images designed to fool a specific model can deceive other machine learning…

Cited by 9 publications (5 citation statements)
References 12 publications (14 reference statements)
“…This method proposes that instead of trying to classify the adversarial instance correctly despite the attack, the adversarial instance is detected and dealt with accordingly. Most often, this action consists in rejecting the detected sample and not letting it pass through the system [24,25,26]. Many detection methods in the area of adversarial learning have been proposed. In [25], the authors used the neural activation pattern to detect adversarial perturbations in the image classification domain.…”
Section: Anomaly Detection
confidence: 99%
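A minimal sketch of the detect-and-reject idea described in the statement above: an adversarial-example detector is placed in front of the IDS classifier, and any flagged sample is rejected rather than classified. The detector choice (IsolationForest), the classifier, and the rejection convention are illustrative assumptions, not the exact setup of the cited works.

```python
# Sketch of "detect & reject": a detector screens inputs before the IDS
# classifier; suspected adversarial samples are rejected, not classified.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier


class DetectAndRejectIDS:
    def __init__(self):
        self.detector = IsolationForest(random_state=0)            # adversarial/outlier detector (assumed)
        self.classifier = RandomForestClassifier(random_state=0)   # the actual IDS model (assumed)

    def fit(self, X_clean, y_clean):
        # Fit the detector on clean traffic only, so adversarially
        # perturbed inputs look anomalous at test time.
        self.detector.fit(X_clean)
        self.classifier.fit(X_clean, y_clean)
        return self

    def predict(self, X):
        is_inlier = self.detector.predict(X)          # +1 = clean, -1 = suspected adversarial
        labels = self.classifier.predict(X)
        # Rejected samples never reach the classifier's decision; -1 marks "rejected".
        return np.where(is_inlier == 1, labels, -1)
```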
“…This property was first explored by Goodfellow et al [14], who show that adversarial examples that fool one model can fool other models with a high probability without the need to have the same architecture or be trained on the same dataset. An attacker can exploit the transferability property to launch black-box attacks by creating a surrogate model trained on data following the same distribution as the target model [27]. This can be easily done by sniffing network traffic [28], especially in a botnet scenario where the attacker already has a foothold in the corporate network.…”
Section: Related Work
confidence: 99%
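A compact sketch of the transferability-based black-box attack described above: the attacker fits a surrogate model on traffic drawn from the same distribution as the target's training data, crafts FGSM examples against the surrogate, and replays them against the unseen target model. The dataset, the data split, and the epsilon value are illustrative assumptions.

```python
# Transferability sketch: FGSM examples crafted on a differentiable
# surrogate (logistic regression) are transferred to a black-box target.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data; the attacker's split follows the same distribution as the target's.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_target, X_attacker, y_target, y_attacker = train_test_split(X, y, test_size=0.5, random_state=0)

target = RandomForestClassifier(random_state=0).fit(X_target, y_target)    # victim IDS (black box)
surrogate = LogisticRegression(max_iter=1000).fit(X_attacker, y_attacker)  # attacker's surrogate


def fgsm(model, X, y, eps=0.3):
    # For logistic regression, the gradient of the cross-entropy loss
    # w.r.t. the input is (sigmoid(w.x + b) - y) * w.
    p = model.predict_proba(X)[:, 1]
    grad = (p - y)[:, None] * model.coef_[0][None, :]
    return X + eps * np.sign(grad)


X_adv = fgsm(surrogate, X_attacker, y_attacker)
print("target accuracy on clean traffic:      ", target.score(X_attacker, y_attacker))
print("target accuracy on transferred attacks:", target.score(X_adv, y_attacker))
```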
“…Instead of trying to correctly classify the adversarial instance into its original class, this strategy recommends detecting the adversarial instance and treating it accordingly. The most common action is to reject the identified adversarial sample and prevent it from passing through the system [45,46,27]. Our proposed defense falls into the category of anomaly detection.…”
Section: Related Work
confidence: 99%
“…In Detect & Reject [61], the authors first studied the impact of the transferability of adversarial attacks on the classifiers, and then the performance of an ensemble IDS combining SVM, DT, Logistic Regression (LR), RF, and Linear Discriminant Analysis (LDA) under the majority voting rule. FGSM and PGD are used to construct adversarial attacks from NSL-KDD data.…”
Section: A Proactive
confidence: 99%
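A brief sketch of the majority-voting ensemble IDS named in the statement above, combining the classifier families it lists (SVM, DT, LR, RF, LDA) with scikit-learn's hard-voting ensemble. The hyperparameters and the synthetic data standing in for NSL-KDD are purely illustrative.

```python
# Majority-voting ensemble IDS: SVM, Decision Tree, Logistic Regression,
# Random Forest and LDA combined with a hard (majority) vote.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for NSL-KDD features and labels (illustrative only).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC()),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("lda", LinearDiscriminantAnalysis()),
    ],
    voting="hard",  # majority voting rule
).fit(X_train, y_train)

print("ensemble accuracy on held-out traffic:", ensemble.score(X_test, y_test))
```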