2018 26th European Signal Processing Conference (EUSIPCO)
DOI: 10.23919/eusipco.2018.8553214
Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables

Abstract: Machine-learning methods have already been exploited as useful tools for detecting malicious executable files. They leverage data retrieved from malware samples, such as header fields, instruction sequences, or even raw bytes, to learn models that discriminate between benign and malicious software. However, it has also been shown that machine learning and deep neural networks can be fooled by evasion attacks (also referred to as adversarial examples), i.e., small changes to the input data that cause misclassification…
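The abstract's core idea — small input changes that flip a detector's decision — can be illustrated with a toy sketch. Since modifying bytes inside an executable risks breaking it, append-based attacks add crafted padding bytes at the end instead. The classifier below is a hypothetical linear model over a byte histogram, not the paper's actual MalConv-style network; all names here are illustrative assumptions.

```python
# Toy append-based evasion attack: the attacker may only add bytes,
# never change existing ones, so each padding byte is chosen greedily
# to push a (stand-in) malware score toward the benign side.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=256)  # hypothetical per-byte-value weights of a linear detector

def malware_score(data: bytes) -> float:
    # Score = weighted byte-value histogram (a crude stand-in classifier).
    hist = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return float(w @ (hist / max(len(data), 1)))

def append_attack(data: bytes, n_pad: int) -> bytes:
    # Greedy choice: the byte value with the most "benign" weight.
    best = int(np.argmin(w))
    return data + bytes([best]) * n_pad

sample = bytes(rng.integers(0, 256, size=1024).tolist())
before = malware_score(sample)
after = malware_score(append_attack(sample, 512))
assert after <= before  # padding dilutes the score toward the minimum weight
```

Because the perturbed score is a weighted average of the original score and the minimum per-byte weight, appending such padding can only lower this toy model's output; real attacks against byte-level networks instead follow the model's gradient.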

Cited by 244 publications (208 citation statements) · References 18 publications
“…On this dataset, we use the pretrained MalConv model released with the dataset. In addition, we also created a smaller dataset whose size and distribution is more in line with Kolosnjaji et al's evaluation [8], which we refer to as the Mini dataset. The Mini dataset was created by sampling 4,000 goodware and 4,598 malware samples from the Full dataset.…”
Section: Datasets
confidence: 99%
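The subsampling step described in the quote above can be sketched as follows. The function and list names are placeholders, not the authors' actual dataset layout; only the class counts (4,000 goodware, 4,598 malware) come from the text.

```python
# Hypothetical sketch: draw a "Mini" subset from a full corpus by
# sampling file paths without replacement, per class.
import random

def make_mini(full_goodware, full_malware, seed=42):
    rng = random.Random(seed)  # fixed seed for a reproducible subset
    mini = (rng.sample(full_goodware, 4000) +
            rng.sample(full_malware, 4598))
    rng.shuffle(mini)  # avoid a class-ordered file list
    return mini

# Usage with synthetic path lists standing in for the Full dataset:
goodware = [f"goodware/{i}.exe" for i in range(10000)]
malware = [f"malware/{i}.exe" for i in range(12000)]
mini = make_mini(goodware, malware)
```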
“…Second, limited availability of representative datasets or robust public models limits the generality of existing studies. Existing attacks [8], [9] use victim models trained on very small datasets, and make various assumptions regarding their strategies. Therefore, the generalization effectiveness across production-scale models and the trade-offs between various proposed strategies are yet to be evaluated.…”
Section: Introduction
confidence: 99%
“…Kolosnjaji et al () have investigated the vulnerability of malware detection methods. They have used deep neural networks to learn from raw bytes of binaries.…”
Section: Adversarial Attacks
confidence: 99%
“…One of the challenges in developing such models are intelligent adversaries who are actively trying to evade them by perturbing the trained model (Grosse, Papernot, Manoharan, Backes, & McDaniel, 2017). Kolosnjaji et al (2018) have investigated the vulnerability of malware detection methods. They have used deep neural networks to learn from raw bytes of binaries.…”
Section: Adversarial Attacks
confidence: 99%