2022
DOI: 10.3390/fi14040108

Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection

Abstract: Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the adaptative perturbation pattern method (A2PM) to fulfill these constraints in a gray-box setting…
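As a rough illustration of the constraint idea described in the abstract (this is not the authors' A2PM implementation; the function name, bounds, and parameters below are assumptions), the sketch perturbs only the numeric features of a tabular record and clips each perturbed value back into its valid interval, so the crafted example remains plausible for the domain.

```python
# Illustrative sketch only: constraint-aware perturbation of a tabular sample.
# Not the A2PM implementation; all names and parameters are assumptions.
import numpy as np

def perturb_tabular(x, numeric_idx, bounds, epsilon=0.05, rng=None):
    """Apply a small random perturbation to the numeric features of `x`,
    keeping each perturbed value inside its valid [min, max] interval so the
    adversarial example stays realistic for a tabular (e.g. network-flow) domain."""
    rng = rng or np.random.default_rng()
    x_adv = np.array(x, dtype=float, copy=True)
    for i in numeric_idx:
        lo, hi = bounds[i]
        step = epsilon * (hi - lo) * rng.uniform(-1.0, 1.0)  # step scaled to the feature range
        x_adv[i] = np.clip(x_adv[i] + step, lo, hi)          # enforce the validity constraint
    return x_adv

# Example: perturb two numeric features of a 3-feature record (third left untouched).
sample = np.array([120.0, 0.4, 2.0])
adv = perturb_tabular(sample, numeric_idx=[0, 1], bounds={0: (0, 1500), 1: (0.0, 1.0)})
print(adv)
```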


Cited by 22 publications (4 citation statements) | References 26 publications

Citation statements (ordered by relevance):
“…GANs excelled in generating diverse data samples, enabling the detection models to learn from a broader spectrum of ransomware behaviors and characteristics, thereby improving detection accuracy [25]. Additionally, GANs proved effective in identifying vulnerabilities within detection systems by generating adversarial examples, which highlighted weaknesses and informed the development of more resilient detection mechanisms [26]. The adaptability of GANs, driven by their ability to learn complex data distributions, allowed for the continuous improvement of detection models as new ransomware variants emerged [27].…”
Section: Related Work
confidence: 99%
“…The literature shows that there has been considerable research on the impact of adversarial attacks on machine learning models [8,9,10]. However, their feasibility in domain-constrained applications, such as intrusion detection systems, is still in its early stages [11,12,13]. Adversarial attacks can be performed in either white-box or black-box settings.…”
Section: Introduction
confidence: 99%
“…A vital challenge faced today by federal and business decision-makers is choosing cost-efficient mitigations to reduce risks from supply chain attacks, particularly adversarial attacks, which are complex, hard to detect, and can lead to severe consequences. Focusing on adversarial attacks and how they can alter the performance of AI-based detection systems, the authors in [13] propose a novel robust solution. Their proposed model was evaluated in both Enterprise and Internet of Things (IoT) networks and is shown to be effective against adversarial classification attacks and adversarial training attacks.…”
confidence: 99%