Leveraging Deep Reinforcement Learning Technique for Intrusion Detection in SCADA Infrastructure
Frantzy Mesadieu,
Damiano Torre,
Anitha Chennameneni
Abstract: The prevalence of cyber-attacks perpetrated over the last two decades, including coordinated attempts to breach targeted organizations, has drastically and systematically exposed some of the more critical vulnerabilities in our cyber ecosystem. This is particularly true of Supervisory Control and Data Acquisition (SCADA) systems, where targeted attacks aim to bypass signature-based protocols and gain control over operational processes. In the past, researchers utilized deep learning and reinforcement l…
“…Moreover, other prior works have discussed the proliferation of network threats and cyber-attacks in recent years and emphasized the need for the development of sophisticated IDSs capable of not only detecting but also effectively mitigating such threats [29,30]. Additionally, the integration of deep learning techniques into IDSs has shown promising results in real-time anomaly detection for IoT and SCADA infrastructures [31,32]. This diversified foundation further supports this study's exploration of XAI within network IDSs.…”
Section: Introduction (supporting)
confidence: 66%
“…They emphasize the need for XAI to provide clarity on AI decisions, enhancing security measures against cyber threats. On the other hand, the works in [29,31,32] propose frameworks to enhance IDSs in the context of IoT and SCADA Systems. It is also worth mentioning [29,30,79], which do the same for network intrusion detection, leveraging ensemble learning and neural network applications.…”
Section: Related Work (mentioning)
confidence: 99%
“…This research introduces a novel approach to intrusion detection that is distinctly advanced beyond previous contributions and addresses the limitations observed in the referenced studies. Unlike the methodologies presented in [29][30][31][32][76][77][78][79][80], which primarily focus on leveraging existing models and algorithms for feature selection and detection or on XAI but applied to different scenarios, such as IoT, autonomous vehicles, or healthcare, this study integrates these techniques within a unique framework that employs explainable AI (XAI) to enhance transparency and interpretability in intrusion detection systems in the context of network intrusion detection. The integration of XAI not only builds on the robust detection capabilities highlighted in [71,78] but also provides the benefits of interpretability and trust [70,71].…”
The exponential growth of network intrusions necessitates the development of advanced artificial intelligence (AI) techniques for intrusion detection systems (IDSs). However, the reliance on AI for IDSs presents several challenges, including the performance variability of different AI models and the opacity of their decision-making processes, hindering comprehension by human security analysts. In response, we propose an end-to-end explainable AI (XAI) framework tailored to enhance the interpretability of AI models in network intrusion detection tasks. Our framework commences with benchmarking seven black-box AI models across three real-world network intrusion datasets, each characterized by distinct features and challenges. Subsequently, we leverage various XAI models to generate both local and global explanations, shedding light on the underlying rationale behind the AI models’ decisions. Furthermore, we employ feature extraction techniques to discern crucial model-specific and intrusion-specific features, aiding in understanding the discriminative factors influencing the detection outcomes. Additionally, our framework identifies overlapping and significant features that impact multiple AI models, providing insights into common patterns across different detection approaches. Notably, we demonstrate that the computational overhead incurred by generating XAI explanations is minimal for most AI models, ensuring practical applicability in real-time scenarios. By offering multi-faceted explanations, our framework equips security analysts with actionable insights to make informed decisions for threat detection and mitigation. To facilitate widespread adoption and further research, we have made our source code publicly available, serving as a foundational XAI framework for IDSs within the research community.
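The abstract above does not name the specific XAI models used, but the idea of a local, model-agnostic explanation can be sketched in a few lines: perturb one input feature at a time toward a baseline value and record how much the black-box score changes. This is only a crude sensitivity analysis standing in for SHAP/LIME-style attribution; the `black_box_score` weights and the feature names (`pkt_rate`, `syn_ratio`, `dst_port_entropy`) are hypothetical, not drawn from the paper.

```python
# Minimal model-agnostic local explanation by single-feature perturbation.
# A real XAI framework would use SHAP/LIME-style attributions instead;
# this is a crude sensitivity sketch with invented weights and features.

def black_box_score(flow):
    """Stand-in 'intrusion probability' model (hypothetical linear weights)."""
    w = {"pkt_rate": 0.6, "syn_ratio": 0.3, "dst_port_entropy": 0.1}
    return sum(w[k] * flow[k] for k in w)

def local_explanation(flow, baseline):
    """Score drop when each feature is replaced by its baseline value."""
    full = black_box_score(flow)
    return {k: full - black_box_score({**flow, k: baseline[k]}) for k in flow}

flow = {"pkt_rate": 0.9, "syn_ratio": 0.8, "dst_port_entropy": 0.2}
baseline = {"pkt_rate": 0.1, "syn_ratio": 0.1, "dst_port_entropy": 0.1}
attributions = local_explanation(flow, baseline)
top_feature = max(attributions, key=attributions.get)
print(attributions, top_feature)
```

For a linear stand-in model the attribution for feature k reduces to w_k times (flow_k minus baseline_k), so the high-weight, strongly elevated `pkt_rate` dominates the explanation; repeating this over many flows and averaging would give a rough global view, mirroring the local/global split described in the abstract.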
With the popularity of the Internet and rising levels of information technology, cyber attacks have become an increasingly serious problem, posing a great threat to the security of individuals, enterprises, and the state. This has made network intrusion detection technology critically important. In this paper, a malicious traffic detection model is constructed from an entropy-based decision tree classifier and the proximal policy optimisation (PPO) algorithm from deep reinforcement learning. First, the decision tree idea from machine learning is used to make a preliminary classification judgement on the dataset based on information entropy; the importance score of each feature in the classification task is calculated, and features with lower contributions are removed. The remaining features are then handed over to the PPO algorithm model for detection, with an entropy regularization term introduced into the PPO update. Finally, the deep reinforcement learning algorithm continuously trains and updates its parameters during detection, yielding a detection model with higher accuracy. Experiments show that the binary classification accuracy of the malicious traffic detection model based on the deep reinforcement learning PPO algorithm reaches 99.17% on the CIC-IDS2017 dataset used in this paper.
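The entropy-based feature-selection step described above can be illustrated with a small self-contained sketch: compute the information gain of each discrete feature (reduction in label entropy after splitting on it, the same criterion an entropy decision tree uses) and drop features whose gain falls below a threshold. The toy flow records, feature names, and the 0.1 threshold here are invented for illustration, not taken from the paper.

```python
# Entropy-based feature importance, as used by a decision tree with the
# information-gain criterion; low-gain features are dropped before the
# (separate) PPO detection stage. Toy data and threshold are hypothetical.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Reduction in label entropy after splitting on a discrete feature."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        cond += (len(subset) / n) * entropy(subset)
    return entropy(labels) - cond

# Toy flow records: (protocol, tcp_flag, label)
rows = [
    ("tcp", "syn", "malicious"),
    ("tcp", "syn", "malicious"),
    ("udp", "ack", "benign"),
    ("tcp", "ack", "benign"),
    ("udp", "syn", "malicious"),
    ("udp", "ack", "benign"),
]
labels = [r[2] for r in rows]
scores = {
    "protocol": information_gain([r[0] for r in rows], labels),
    "tcp_flag": information_gain([r[1] for r in rows], labels),
}
# Keep only features whose gain clears a (hypothetical) 0.1-bit threshold
kept = [f for f, s in scores.items() if s > 0.1]
print(scores, kept)
```

On this toy data `tcp_flag` perfectly separates the classes (gain of 1 bit) while `protocol` contributes almost nothing (about 0.08 bits), so only `tcp_flag` survives; on a real dataset such as CIC-IDS2017 the same scoring would be applied across all flow features before training the PPO agent.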