2022
DOI: 10.3390/app12136451

Adversarial Robust and Explainable Network Intrusion Detection Systems Based on Deep Learning

Abstract: The ever-evolving cybersecurity environment has given rise to sophisticated adversaries who constantly explore new ways to attack cyberinfrastructure. Recently, the use of deep learning-based intrusion detection systems has been on the rise, owing to the complexity and efficiency of deep neural networks (DNNs) in making anomaly detection more accurate. However, this complexity makes them black-box models, as they lack explainability and interpretability. Not only is the DNN perceiv…


Cited by 14 publications (2 citation statements)
References 29 publications
“…In addition, the proposed approach performs well on imbalanced data and has a better ability to identify different cyberattacks, as mentioned in the previous sub-sections. Therefore, RF-2NIDS can be considered a robust model against adversarial attacks, whose only purpose is to trick the learnt model into producing incorrect results (McCarthy et al., 2022; Sauka et al., 2022; Zhao et al., 2022). This will be proven in the next contribution.…”
Section: Comparison With Existing Methods (mentioning)
Confidence: 99%
“…Debicha et al. [35] investigated the impact of adversarial attacks on deep learning-based IDS and proposed a defence mechanism using adversarial training. Sauka et al. [36] focused on the adversarial vulnerability and explainability of deep learning-based IDS and proposed an adversarial training and explainable AI framework based on SHAP. Meanwhile, [37] studied the impact of adversarial attacks on ML- and deep learning-based IDS in the IoT security domain.…”
Section: Background and Related Work (mentioning)
Confidence: 99%