Intrusion Detection Systems (IDS) are necessary for system monitoring, but they produce a huge number of alerts. Alert correlation is a process applied to IDS alerts in order to reduce their number. In this paper we propose a new approach to alert correlation that integrates additional information into the correlation process: the security operator's knowledge and preferences. This information concerns the monitored system and the risk level of each alert, according, for instance, to the operator's experience. Representation of and reasoning on this knowledge and these preferences are done using Qualitative Choice Logic (QCL) and its extensions: Prioritized Qualitative Choice Logic (PQCL) and Positive Qualitative Choice Logic (QCL+). Experiments are carried out on data from a real monitored system. The result is a set of ordered alerts that satisfies the operator's criteria.
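The preference-based ordering this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's QCL/PQCL machinery: each operator preference is modeled as a simple predicate, and an alert's rank is the position of the first preference it satisfies, loosely mimicking QCL's ordered disjunction (prefer the first option, accept later ones). All names and the toy alerts are hypothetical.

```python
# Hypothetical sketch of preference-based alert ordering. Real QCL/PQCL
# reasoning is far more expressive; here each operator preference is a
# predicate, and an alert's rank is the index of the first predicate it
# satisfies (lower rank = more preferred).

def alert_rank(alert, preferences):
    """Return the 1-based index of the first satisfied preference."""
    for rank, pref in enumerate(preferences, start=1):
        if pref(alert):
            return rank
    return len(preferences) + 1  # satisfies no preference: least preferred

def order_alerts(alerts, preferences):
    """Order alerts from most to least preferred (stable sort)."""
    return sorted(alerts, key=lambda a: alert_rank(a, preferences))

# Usage: the operator prefers alerts on a critical host first,
# then any high-severity alert, then everything else.
prefs = [lambda a: a["host"] == "db-server",
         lambda a: a["severity"] >= 8]
alerts = [{"id": 1, "host": "web", "severity": 9},
          {"id": 2, "host": "db-server", "severity": 3},
          {"id": 3, "host": "web", "severity": 2}]
ordered = order_alerts(alerts, prefs)  # ids in order: 2, 1, 3
```

The output is exactly what the abstract promises at a high level: a total order on alerts that reflects the operator's criteria rather than raw arrival order.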
The advancement of network security threats has led to the development of new Intrusion Detection Systems (IDS) that rely on deep learning algorithms, known as deep IDS. Like other systems based on deep learning, deep IDS suffer from adversarial examples: malicious inputs crafted to change the prediction of a machine learning/deep learning model. Protecting deep learning models against adversarial examples remains an open challenge. In this paper, we propose “NIDS-Defend”, a framework to enhance the robustness of Network IDS against adversarial attacks. Our framework is composed of two layers, a statistical test and a classifier, that together detect adversarial examples in real time. The detection process consists of two steps: (1) flagging flows that contain adversarial examples with a statistical test, and (2) extracting individual adversarial examples from the previously flagged flows with a classifier. Our approach is evaluated on a binary IDS with the NSL-KDD dataset. To generate adversarial examples, the crafting methods used are (1) the Boundary attack and (2) HopSkipJumpAttack. We first investigate the vulnerabilities of a Network IDS to adversarial examples, then apply our defense. The statistical test distinguishes adversarial flows with more than 95% accuracy, and the classifier detects individual adversarial examples with more than 80% accuracy. We also show that our framework detects adversarial examples crafted by an adversary aware of the defense, confirming the effectiveness of our solution against adversarial attacks.
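The two-step detection process described in this abstract can be sketched as a pipeline. This is a hypothetical stand-in, not the paper's actual models: the flow-level statistic, its threshold, and the per-example classifier are toy placeholders chosen only to show how flagging whole flows precedes extracting individual examples.

```python
# Hypothetical sketch of a two-layer defense pipeline in the spirit of
# "NIDS-Defend": a flow-level statistical test, then a per-example classifier.
# The statistic and classifier below are illustrative stand-ins.

def flow_statistic(flow):
    """Toy test statistic: mean absolute feature magnitude over the flow."""
    return sum(sum(abs(v) for v in ex) for ex in flow) / max(len(flow), 1)

def flag_adversarial_flows(flows, threshold):
    """Step 1: flag indices of flows whose statistic exceeds the threshold."""
    return [i for i, flow in enumerate(flows) if flow_statistic(flow) > threshold]

def extract_adversarial_examples(flow, classify):
    """Step 2: keep the individual examples a classifier marks adversarial."""
    return [ex for ex in flow if classify(ex)]

# Usage with toy data: one benign flow, one flow containing a "perturbed"
# example (large feature values mimic an out-of-distribution crafted input).
benign = [[0.1, 0.2], [0.0, 0.1]]
perturbed = [[0.1, 0.2], [5.0, 4.0]]
flows = [benign, perturbed]

flagged = flag_adversarial_flows(flows, threshold=1.0)        # -> [1]
suspects = [extract_adversarial_examples(flows[i], lambda ex: max(ex) > 1.0)
            for i in flagged]                                  # -> [[[5.0, 4.0]]]
```

Running the classifier only inside flagged flows is what makes the design plausible for real-time use: the cheap statistical test prunes most traffic before the more expensive per-example check runs.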