2021
DOI: 10.1145/3473039

Adversarial EXEmples

Abstract: Recent work has shown that adversarial Windows malware samples, referred to as adversarial EXEmples in this article, can bypass machine learning-based detection relying on static code analysis by perturbing relatively few input bytes. To preserve malicious functionality, previous attacks either add bytes to existing non-functional areas of the file, potentially limiting their effectiveness, or require running computationally demanding validation steps to discard malware variants that do …
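The simplest functionality-preserving manipulation the abstract alludes to, adding bytes to non-functional areas of the file, is easy to sketch. The toy Python helper below (hypothetical, not the authors' code) appends padding past the end of a PE file, in the so-called overlay, which the Windows loader never maps, so execution is unchanged while a static byte-based detector sees a different input:

```python
import os

def append_overlay(exe_path: str, out_path: str, n_bytes: int = 128) -> None:
    """Append n_bytes of padding after the last section of a PE file.

    Bytes past the end of the mapped sections (the 'overlay') are ignored
    by the Windows loader, so the program's behavior is preserved while a
    static byte-based classifier sees a modified input.
    """
    with open(exe_path, "rb") as f:
        data = f.read()
    padding = os.urandom(n_bytes)  # a real attack would optimize these bytes
    with open(out_path, "wb") as f:
        f.write(data + padding)
```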

Cited by 73 publications (55 citation statements); References 19 publications

“…By varying the aggressiveness of smoothing we examine tradeoffs between robustness certification and accuracy. We find that it is possible to maintain a high accuracy of 91% while guaranteeing robustness to adversarial edits of up to 128 bytes on average, which exceeds edit distances of two published evasion attacks [20,22]. This suggests potential for operationalizing certifications of static malware detection, in some cases.…”
Section: Introduction (mentioning; confidence: 85%)
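
For context, the smoothing the statement refers to certifies a detector by majority vote over randomly perturbed copies of the input. A minimal prediction-step sketch, assuming a base classifier f mapping bytes to a label and a deletion-based perturbation (one published mechanism for byte-level edit-distance certificates; all names here are illustrative):

```python
import random
from collections import Counter

def smoothed_predict(f, data: bytes, p_del: float = 0.97,
                     n_samples: int = 100) -> int:
    """Majority-vote prediction of a randomized-smoothing classifier.

    Each vote classifies a randomly perturbed copy of the input; here the
    perturbation deletes each byte independently with probability p_del.
    Higher p_del gives more aggressive smoothing, hence larger certified
    radii at some cost in accuracy, the tradeoff the statement examines.
    """
    votes = Counter()
    for _ in range(n_samples):
        kept = bytes(b for b in data if random.random() > p_del)
        votes[f(kept)] += 1
    return votes.most_common(1)[0][0]
```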
“…The certified radii we observe are close to the best radii theoretically achievable using our mechanism. For the Levenshtein byte-level edit distance threat model, we obtain radii of a few hundred bytes in size, which can certifiably defend against attacks that edit headers of PE files [20,22,57]. However, certifying robustness against more powerful attacks that modify thousands or millions of bytes remains an open challenge.…”
Section: Discussion (mentioning; confidence: 99%)
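
The byte-level Levenshtein distance that defines this threat model counts the minimum number of single-byte insertions, deletions, and substitutions between the original and the perturbed file. A textbook dynamic-programming implementation (not tied to the cited papers' code) makes the certified radius concrete:

```python
def levenshtein(a: bytes, b: bytes) -> int:
    """Minimum number of single-byte insertions, deletions, and
    substitutions needed to turn byte string a into byte string b."""
    prev = list(range(len(b) + 1))        # distances from empty a to b prefixes
    for i, ca in enumerate(a, 1):
        cur = [i]                         # distance from a[:i] to empty b
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur
    return prev[-1]
```

A certificate of radius r then guarantees a stable prediction for every variant with levenshtein(original, variant) <= r, which is why radii of a few hundred bytes cover header-editing attacks.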
“…Our work is dedicated to detecting query-based black-box attacks, and the function of this module is to reproduce typical query-based black-box attacks. Since adversarial attacks on malware were investigated later than adversarial attacks on images, and most AEs are generated on feature vectors or substitute models [28, 36–40], there are few query-based black-box attack methods that can generate real adversarial example files and that publish open-source code [14,15,41,42]. We choose two advanced score-based black-box attack frameworks [14,15].…”
Section: Reproduce Black-box Attacks (mentioning; confidence: 99%)
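
A score-based black-box attack of the kind reproduced here needs only query access to the detector's reported maliciousness score. The toy loop below (illustrative, not one of the cited frameworks) hill-climbs on functionality-preserving overlay appends:

```python
import os

def score_based_attack(score, exe: bytes, n_queries: int = 500,
                       chunk: int = 64, threshold: float = 0.5) -> bytes:
    """Greedy score-based black-box evasion sketch.

    `score` is assumed to map raw bytes to a maliciousness probability;
    `threshold` is an assumed decision boundary. Each query appends a
    random payload to the overlay (preserving functionality) and keeps
    it only if the returned score drops.
    """
    best, best_score = exe, score(exe)
    for _ in range(n_queries):
        candidate = best + os.urandom(chunk)
        s = score(candidate)
        if s < best_score:
            best, best_score = candidate, s
        if best_score < threshold:   # evasion achieved
            break
    return best
```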
“…[46] proposed an adaptive white-box attack that is aware of the defense mechanism and attempts to overcome it. [47] also discussed the limitations of their methodology and how it might be extended in future work to attack malware classifiers based on dynamic analysis. [48] generated a series of problem-space modifications that produce UAPs in the appropriate feature-space embedding and analyzed their performance across attack models with varying degrees of attacker knowledge.…”
Section: Related Work (mentioning; confidence: 99%)
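
For readers unfamiliar with the term, a universal adversarial perturbation (UAP) is a single perturbation that evades the classifier on many inputs at once. A minimal feature-space sketch under assumed white-box gradient access (grad_fn and the update rule are illustrative assumptions, not the method of [48], which additionally realizes the perturbation through problem-space file modifications):

```python
import numpy as np

def feature_space_uap(grad_fn, X: np.ndarray, eps: float = 0.1,
                      lr: float = 0.01, n_epochs: int = 5) -> np.ndarray:
    """Compute one perturbation vector v, bounded by eps in L-infinity
    norm, that lowers the malicious-class score across many samples.

    `grad_fn(x)` is assumed to return the gradient of the malicious-class
    score with respect to the feature vector x.
    """
    v = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for x in X:
            v -= lr * grad_fn(x + v)     # step against the malicious score
            v = np.clip(v, -eps, eps)    # keep v small and universal
    return v
```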