2018
DOI: 10.1007/978-3-030-00470-5_23

Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers

Abstract: In this paper, we present a black-box attack against API call based machine learning malware classifiers, focusing on generating adversarial sequences combining API calls and static features (e.g., printable strings) that will be misclassified by the classifier without affecting the malware functionality. We show that this attack is effective against many classifiers due to the transferability principle between RNN variants, feed-forward DNNs, and traditional machine learning classifiers such as SVM. We also i…

Cited by 163 publications (178 citation statements). References 15 publications.
“…Due to the relative simplicity of the PDF file structure, it is easy to alter the file without changing the original content. Rosenberg et al [37] proposed a black-box attack against machine learning based malware detectors in Windows OS based on analysing API calls. The attack algorithm iteratively added no-op system calls (extracted from benign software) to the binary code.…”
Section: A. Adversarial Attacks to Malware Detection
Mentioning; confidence: 99%
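The iterative no-op insertion described in this excerpt can be sketched in a few lines of Python. This is only an illustration of the idea, not the authors' implementation: surrogate_predict and BENIGN_API_CALLS are hypothetical placeholders for the attacked (or surrogate) classifier and a pool of API calls harvested from benign software.

```python
import random

# Hypothetical pool of API calls harvested from benign software; inserting them
# does not change the malware's behaviour (they act as no-ops).
BENIGN_API_CALLS = ["RegQueryValueExW", "GetSystemTimeAsFileTime", "LoadLibraryW"]

def evade(api_sequence, surrogate_predict, max_insertions=100):
    """Iteratively insert benign ("no-op") API calls into the malware's API call
    trace until the classifier under attack no longer flags it as malicious.

    api_sequence      : list of API call names observed for the malware sample
    surrogate_predict : callable returning True if the sequence is classified malicious
    """
    adversarial = list(api_sequence)
    for _ in range(max_insertions):
        if not surrogate_predict(adversarial):
            return adversarial  # evasion succeeded
        pos = random.randrange(len(adversarial) + 1)        # pick an insertion point
        adversarial.insert(pos, random.choice(BENIGN_API_CALLS))
    return adversarial  # insertion budget exhausted; sample may still be detected
```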
“…Specifically, adversarial examples are generated from benign samples by adding a small perturbation that is imperceptible to human eyes, i.e., X* = X + δ_X and F(X*) = Y*, where X* is an adversarial example, δ_X is the perturbation, and Y* is the adversarial label. Prior works [9], [10] have shown that adversarial examples are detrimental to many systems in the real world. For instance, under adversarial example attacks, the automatic driving system may take a stop sign as an acceleration sign [9] and malware can evade the detection systems [10].…”
Section: Neural Network and Adversarial Examples
Mentioning; confidence: 99%
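A minimal sketch of the X* = X + δ_X definition quoted above, using the fast gradient sign method (FGSM) as one concrete, commonly used way to choose the perturbation; the model f, input x, and label y are placeholders, not taken from the cited works.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(f, x, y, epsilon=0.01):
    """Return x_adv = x + delta, where delta is a small perturbation chosen so
    that f(x_adv) is likely to differ from the true label y (untargeted FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(f(x), y)      # loss of the correct label
    loss.backward()                      # gradient of the loss w.r.t. the input
    delta = epsilon * x.grad.sign()      # small, sign-based perturbation delta_X
    x_adv = (x + delta).detach()         # X* = X + delta_X
    return x_adv
```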
“…Prior works [9], [10] have shown that adversarial examples are detrimental to many systems in the real world. For instance, under adversarial example attacks, the automatic driving system may take a stop sign as an acceleration sign [9] and malware can evade the detection systems [10]. Depending on whether there is a specified target for misclassification, adversarial example attacks can be categorized into two types, i.e., targeted and untargeted attacks.…”
Section: Neural Network and Adversarial Examples
Mentioning; confidence: 99%
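The targeted/untargeted distinction quoted above comes down to which loss the perturbation optimises; a hedged sketch (placeholder logits and labels, PyTorch-style cross-entropy, not code from the cited papers):

```python
import torch.nn.functional as F

def attack_loss(logits, true_label, target_label=None):
    """Loss minimised while crafting the perturbation.

    Untargeted attack: push the prediction away from the true label, so any
    misclassification counts. Targeted attack: pull the prediction toward a
    chosen target label, so the model outputs exactly that label."""
    if target_label is None:
        return -F.cross_entropy(logits, true_label)   # maximise error on the true label
    return F.cross_entropy(logits, target_label)      # minimise error on the target label
```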
“…2. For example, during the training stage [10], [11], the dataset [12], tools, and architecture/model are vulnerable to security attacks, such as adding parallel layers or neurons [13], [14] to perform security attacks [15], [16]. Similarly, during the hardware implementation and inference stages, the computational hardware and real-time dataset can be exploited to perform security attacks [17], [18].…”
Section: A. Security Threats in DNN Modules
Mentioning; confidence: 99%
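As a purely hypothetical illustration of the "parallel layers or neurons" threat mentioned in this excerpt, a hidden branch could be wired alongside a benign model so that its output is summed into the logits; every class and parameter name below is invented for the sketch.

```python
import torch.nn as nn

class ParallelBranchWrapper(nn.Module):
    """Hypothetical sketch of a 'parallel layer' attack surface: an extra branch is
    added next to a benign model, and its output is summed into the logits, which an
    attacker could use to bias predictions (e.g., when a trigger pattern is present)."""
    def __init__(self, benign_model, in_features, num_classes):
        super().__init__()
        self.benign_model = benign_model
        self.hidden_branch = nn.Linear(in_features, num_classes)  # attacker-inserted layer

    def forward(self, x):
        logits = self.benign_model(x)
        logits = logits + self.hidden_branch(x.flatten(1))  # parallel path alters the output
        return logits
```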