2020 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/sp40000.2020.00073

Intriguing Properties of Adversarial ML Attacks in the Problem Space

Abstract: Recent research efforts on adversarial ML have investigated problem-space attacks, focusing on the generation of real evasive objects in domains where, unlike images, there is no clear inverse mapping to the feature space (e.g., software). However, the design, comparison, and real-world implications of problem-space attacks remain underexplored. This paper makes two major contributions. First, we propose a novel formalization for adversarial ML evasion attacks in the problem-space, which includes the definition…

Cited by 177 publications (197 citation statements)
References 48 publications
“…It is extremely challenging for the malware classifier to differentiate between malware that is trying to read a non-existent registry key as an added adversarial no-op and benign application functionality, e.g., trying to find a registry key containing information from previous runs and creating it if it doesn't exist (for instance, during the first run of the application). This makes our problem-space attack robust to preprocessing [38].…”
Section: Methodology, 3.1 Attacking API Call-based Malware Classifiers
confidence: 99%
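To make the indistinguishability claim above concrete, here is a minimal Python sketch (using the Windows-only `winreg` standard-library module; the registry key names are hypothetical, chosen for illustration). Both functions begin with the same failed registry read, so a preprocessing defense cannot simply strip failed lookups without also breaking legitimate first-run logic.

```python
# Sketch: the adversarial no-op and the benign first-run pattern produce
# near-identical API call trace prefixes (RegOpenKey -> not found), which is
# why a preprocessing defense cannot simply discard "failed registry reads".
import winreg  # Windows-only standard-library module

APP_KEY = r"Software\ExampleApp"  # hypothetical key name, illustration only

def adversarial_noop():
    """Added by the attack: read a key that never exists; result is discarded."""
    try:
        winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\NoSuchVendor\NoSuchKey")
    except OSError:
        pass  # expected: the call exists only to perturb the API call sequence

def benign_first_run():
    """Legitimate pattern: look for state from a previous run, create it if absent."""
    try:
        winreg.OpenKey(winreg.HKEY_CURRENT_USER, APP_KEY)
    except OSError:
        winreg.CreateKey(winreg.HKEY_CURRENT_USER, APP_KEY)  # first run: create state
```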
“…This is due to the fact that adversarial training is less effective against random attacks like ours, because a different stochastic adversarial sequence is generated every time, making it challenging for the classifier to generalize from one adversarial sequence to another. More effective RNN defense methods, including domain-specific methods, e.g., systems that measure CPU usage [35], detect irregular API call subsequences [27] (such as the no-op API calls used in this paper), or otherwise assess the plausibility of our attack [38], in order to detect adversarial examples, will be a part of our future work.…”
Section: Defenses and Mitigation Techniques
confidence: 99%
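The following sketch shows why such a stochastic attack is hard to learn away: every invocation resamples the no-op choices and positions, so no two adversarial sequences for the same input look alike. This is illustrative Python under stated assumptions (the `NOOP_CALLS` list and insertion rate are hypothetical, not the cited paper's exact method).

```python
# Minimal sketch of a stochastic no-op insertion attack on an API-call-sequence
# classifier; NOOP_CALLS and insertion_rate are illustrative assumptions.
import random

NOOP_CALLS = ["RegOpenKey(missing)", "GetSystemTime", "Sleep(0)"]  # hypothetical no-ops

def randomize_trace(api_trace, insertion_rate=0.3, rng=None):
    """Return a new trace with functional no-ops inserted at random positions.

    Because positions and no-ops are resampled on every call, two evasions of
    the same malware rarely share an adversarial subsequence, which is what
    makes adversarial training on previously seen examples generalize poorly.
    """
    rng = rng or random.Random()
    out = []
    for call in api_trace:
        if rng.random() < insertion_rate:
            out.append(rng.choice(NOOP_CALLS))  # no-op preserves program behavior
        out.append(call)
    return out

# Each run yields a different adversarial sequence for the same input trace:
trace = ["CreateFile", "WriteFile", "RegSetValue"]
print(randomize_trace(trace))
print(randomize_trace(trace))
```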
“…Depending on the different levels of knowledge held by an attacker, attack scenarios can be categorized into three different classes: white-box (i.e., perfect knowledge), gray-box (i.e., limited knowledge), and black-box (i.e., zero knowledge) [32], [33]. The less information available to an attacker, the closer the attack scenario is to black-box.…”
Section: A. Threat Model
confidence: 99%
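One way to make the three classes concrete is to encode each threat model by the pieces of knowledge the attacker holds. The fields below are an illustrative assumption, not the exact taxonomy of [32], [33].

```python
# Illustrative encoding of the three threat models by attacker knowledge.
from dataclasses import dataclass

@dataclass(frozen=True)
class ThreatModel:
    knows_architecture: bool   # model family / network structure
    knows_parameters: bool     # trained weights
    knows_training_data: bool  # dataset used to fit the model
    query_access: bool         # can submit inputs and observe outputs

WHITE_BOX = ThreatModel(True, True, True, True)     # perfect knowledge
GRAY_BOX  = ThreatModel(True, False, False, True)   # limited knowledge
BLACK_BOX = ThreatModel(False, False, False, True)  # zero knowledge beyond queries
```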
“…Cylance's PROTECT anti-malware engine, which deploys DL models, was recently evaded by adversarial attacks [38]. Thus, ML-based malware detectors are under constant threat of adversarial attacks [39][40][41][42][43][44][45]. This calls for the development of robust and secure learning models, deployed in malware detectors, that can withstand adaptive adversarial attacks [46][47][48][49][50][51][52].…”
Section: Introduction
confidence: 99%