2020
DOI: 10.1109/access.2020.2974752

Adversarial Machine Learning Applied to Intrusion and Malware Scenarios: A Systematic Review

Abstract: Cyber-security is the practice of protecting computing systems and networks from digital attacks, which are a rising concern in the Information Age. With the growing pace at which new attacks are developed, conventional signature-based attack detection methods are often not enough, and machine learning poses a potential solution. Adversarial machine learning is a research area that examines both the generation and detection of adversarial examples, which are inputs specially crafted to deceive classifiers, …

Cited by 139 publications (56 citation statements)
References 42 publications
“…ML-NIDS are gaining popularity [11,30], with the typical consequence that real attackers are turning their attention to the vulnerabilities of some ML components. This paper considers the so-called adversarial attacks against ML-NIDS [62,92], and makes no assumption about the specific ML algorithm or the data type analyzed by the ML-NIDS.…”
Section: Analysis and Classification
Mentioning confidence: 99%
“…By leveraging such information it is possible to understand the decision boundaries of the ML model and to craft specific samples that thwart the detection mechanism. These attacks are also known as white-box attacks [62].…”
Section: Adversarial Attacks Against Machine Learning
Mentioning confidence: 99%
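The white-box setting described above — full knowledge of the model's weights used to push a sample across the decision boundary — can be sketched with a gradient-sign (FGSM-style) perturbation. The toy logistic-regression detector, its weights, and the `fgsm_evasion` helper below are all illustrative assumptions, not code from the surveyed works:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_evasion(x, w, eps):
    """One FGSM-style step against a linear (logistic-regression) detector.

    For a linear model, the score gradient w.r.t. the input is proportional
    to the weight vector w, so moving each feature against sign(w) lowers
    the 'malicious' score — the white-box attacker exploits this directly.
    """
    return x - eps * np.sign(w)

# Toy detector: flags a sample as malicious when sigmoid(w.x + b) > 0.5.
w = np.array([2.0, -1.0])
b = -0.5
x = np.array([1.0, 0.2])                 # originally detected as malicious

score_before = sigmoid(w @ x + b)        # ~0.79, above the 0.5 threshold
x_adv = fgsm_evasion(x, w, eps=0.6)
score_after = sigmoid(w @ x_adv + b)     # ~0.38, now classified as benign
```

In the intrusion- and malware-detection setting the same idea is constrained further: perturbed features must still correspond to a functional attack, which is what distinguishes these domains from image-based adversarial examples.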
“…Lastly, evasion has also been a major topic among researchers and practitioners combining ML with cybersecurity. As an example, the survey [31] focused on malware evading ML-based detection techniques. It explored adversarial attacks and the applications of adversarial ML for malware detection.…”
Section: Surveys On Machine Learning Applications
Mentioning confidence: 99%
“…• limited interest in information hiding: as shown, only one recent work dealt with information hiding (specifically, in the context of mobile devices). [flattened survey-comparison table omitted; rows cite surveys [11]–[39] against topics such as fileless malware, adversarial ML, evasion, APT, C&C communication, stealth malware, and visualisation] As modern malware is increasingly exploiting some form of steganography, information hiding and obfuscation to launch attacks or exfiltrate data [42], [43], this consolidated trend should be taken into account. • lack of sufficient coverage of new threats: despite the vivacity of the topic, many works continue to focus on the "legacy" hazards, e.g., phishing.…”
Section: Contributions and Survey Architecture
Mentioning confidence: 99%
“…self-driving car) by proposing a possible attack to misclassify signs on the road. Several studies have also shown that AEs can be generated and utilized to disturb malware detection [95], [96] and intrusion detection [97], [98]; forensic investigators should be aware of the data injection crime. If perpetrators have permission to modify or delete some training data, they can perform fatal attacks on the AI system; this is the data modification crime.…”
Section: ) Training System Attack
Mentioning confidence: 99%