2020
DOI: 10.48550/arXiv.2010.09569
Preprint
Against All Odds: Winning the Defense Challenge in an Evasion Competition with Diversification

Abstract: Machine learning-based systems for malware detection operate in a hostile environment. Consequently, adversaries will also target the learning system and use evasion attacks to bypass the detection of malware. In this paper, we outline our learning-based system PEberus that got the first place in the defender challenge of the Microsoft Evasion Competition, resisting a variety of attacks from independent attackers. Our system combines multiple, diverse defenses: we address the semantic gap, use various classifi…
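The diversification strategy the abstract describes, several heterogeneous defenses that an attacker must bypass all at once, can be sketched as follows. This is a minimal illustration only: the detector names, features, and thresholds below are invented placeholders, not the actual PEberus implementation.

```python
# Hedged sketch of defense-by-diversification: heterogeneous detectors are
# OR-combined, so a sample is classified benign only if it passes EVERY
# defense. All feature names and thresholds are illustrative placeholders.
from typing import Callable, Dict, List

Detector = Callable[[Dict[str, float]], bool]  # returns True if malicious

def make_threshold_detector(feature: str, threshold: float) -> Detector:
    """Classifier over one feature view; each detector watches a different
    view, so evading a single feature cannot fool the whole ensemble."""
    return lambda sample: sample.get(feature, 0.0) > threshold

def diversified_verdict(sample: Dict[str, float],
                        detectors: List[Detector]) -> str:
    """OR-combination: any firing detector flags the sample, raising the
    cost of an evasion attack against the combined system."""
    return "malicious" if any(d(sample) for d in detectors) else "benign"

detectors = [
    make_threshold_detector("entropy", 7.2),         # packed/encrypted payloads
    make_threshold_detector("import_anomaly", 0.8),  # unusual import table
    make_threshold_detector("overlay_ratio", 0.5),   # semantic-gap heuristic
]

print(diversified_verdict({"entropy": 7.9, "import_anomaly": 0.1}, detectors))  # malicious
print(diversified_verdict({"entropy": 6.0, "overlay_ratio": 0.2}, detectors))   # benign
```

The OR-combination is the key design choice: with independent, diverse defenses, an adversarial example crafted against one detector still has to survive all the others.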

Cited by 3 publications (3 citation statements)
References 28 publications
“…As the first-place entry in the defender challenge of Microsoft's 2020 Machine Learning Security Evasion Competition [86], Quiring et al. present PEberus [104], a combinatorial framework of adversarial defenses built on the following three defense methods.…”
Section: Other Defense Methods
“…Several empirical defense methods have been proposed to improve the robustness of ML classifiers [23,63]. Íncer Romeo et al. [36] compose manually crafted Boolean features with a classifier that is constrained to be monotonically increasing with respect to selected inputs.…”
Section: Related Work
“…Demontis et al. [23] show that the sensitivity of linear support vector machines to adversarial perturbations can be reduced by training with ℓ∞ regularization of the weights. In another work, Quiring et al. [63] take advantage of heuristic-based semantic-gap detectors and an ensemble of feature classifiers to improve empirical robustness. Compared to our work on certified adversarial defenses, these approaches do not provide formal guarantees.…”
Section: Related Work