2018 IEEE Security and Privacy Workshops (SPW)
DOI: 10.1109/spw.2018.00020

Adversarial Deep Learning for Robust Detection of Binary Encoded Malware

Abstract: Malware is constantly adapting in order to avoid detection. Model-based malware detectors, such as SVMs and neural networks, are vulnerable to so-called adversarial examples: modest changes to detectable malware that allow the resulting malware to evade detection. Continuous-valued methods that are robust to adversarial examples of images have been developed using saddle-point optimization formulations. We are inspired by them to develop similar methods for the discrete, e.g. binary, domain which char…
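The saddle-point idea in the abstract can be illustrated with a deliberately simplified sketch: an inner loop that greedily flips zero bits to one (a stand-in for functionality-preserving changes such as adding unused features; the restriction to 0→1 flips, the greedy single-bit attack, and the linear hinge-loss detector are all assumptions of this sketch, not the paper's exact method), and an outer SGD loop that trains the detector on those worst-case inputs.

```python
import numpy as np

def adv_bit_flip(x, w, b, y, k=4):
    """Inner maximization (sketch): greedily flip up to k zero bits to one,
    each time choosing the flip that most increases the hinge loss.
    Restricting flips to 0->1 is a simplified stand-in for
    functionality-preserving modifications of a binary feature vector."""
    x = x.copy()
    for _ in range(k):
        candidates = np.where(x == 0)[0]
        if candidates.size == 0:
            break
        # Flipping bit j from 0 to 1 changes the margin y*f(x) by y*w[j],
        # so the hinge-loss increase from that flip is -y*w[j].
        gains = -y * w[candidates]
        j = candidates[np.argmax(gains)]
        if gains.max() <= 0:
            break
        x[j] = 1.0
    return x

def adv_train_step(X, Y, w, b, lr=0.1):
    """Outer minimization: one subgradient pass of hinge-loss SGD on the
    worst-case (bit-flipped) inputs -- the saddle-point loop in miniature."""
    for x, y in zip(X, Y):
        x_adv = adv_bit_flip(x, w, b, y)
        if y * (x_adv @ w + b) < 1:  # hinge loss is active
            w = w + lr * y * x_adv
            b = b + lr * y
    return w, b
```

Training against the inner attack rather than the clean inputs is what makes the resulting detector robust to the same class of perturbations at test time.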

Cited by 137 publications (126 citation statements) | References 24 publications
“…In [1], methods are introduced that are capable of generating functionally preserved adversarial malware examples in the binary domain. Using the saddle-point formulation, they incorporate the adversarial examples into the training of models that are robust to them.…”
Section: Adversarial Malware
confidence: 99%
“…We follow the notation in [1]. The data distribution D contains tuples of binary representations of executable files and their corresponding labels.…”
Section: 3.1 Notation and Saddle-point Formulation
confidence: 99%
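The notation in the quote above leads to the standard robust-optimization objective; written out (a reconstruction following the common saddle-point framing of adversarial training, with S(x) used here as a label, assumed for this sketch, for the set of functionality-preserving binary perturbations of x):

```latex
\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}
\left[ \max_{\bar{x} \in S(x)} L(\theta, \bar{x}, y) \right]
```

The inner maximization searches for the worst-case perturbed variant of each binary feature vector; the outer minimization fits the detector parameters θ against those variants.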
“…Many of the studies on malware classification stem from the Microsoft 2015 Kaggle competition [9], in which deep learning methods based on the visual features of malware obtained an accuracy of 99.8%. Yet deep learning-based methods suffer from several problems, including (i) the complexity of the model structure prevents interpretability and explainability, (ii) the features extracted (especially visual features) consume too much memory and their validity remains vague, and (iii) the model can be attacked by gradient-based adversarial methods, as in image classification tasks [10]–[12]. Recently, fuzzy theory has been incorporated into deep learning systems to provide interpretability and robustness [13]–[16].…”
Section: Introduction
confidence: 99%