2021
DOI: 10.1007/978-3-030-86514-6_3
AIMED-RL: Exploring Adversarial Malware Examples with Reinforcement Learning

Cited by 11 publications (6 citation statements)
References 14 publications
“…Another reinforcement learning approach is presented in [28], in which Labaca-Castro et al presented the AIMED-RL adversarial attack framework. This attack can generate adversarial examples that lead machine learning models to misclassify malicious files without compromising their functionality.…”
Section: Reinforcement Learning-based Attacks
confidence: 99%
“…On the basis of gym-malware, there are multiple follow-up work [20,39,41,42,72,76,139] proposing problem-space black-box adversarial attacks against static PE malware detection models.…”
Section: 2.2
confidence: 99%
“…Naturally, they propose an improved RL-based adversarial attack framework, AMG-VAC, on the basis of gym-malware [8,9] by adopting a variational actor-critic, which has demonstrated state-of-the-art performance in handling environments with combinatorially large state spaces. As previous RL-based adversarial attacks tend to generate homogeneous and long sequences of transformations, Labaca-Castro et al [72] present the RL-based adversarial attack framework AIMED-RL. The main difference between AIMED-RL and other RL-based adversarial attacks is that AIMED-RL introduces a novel penalization in the reward function to increase the diversity of the generated transformation sequences while minimizing their lengths.…”
Section: 2.2
confidence: 99%
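The penalization described above can be illustrated with a minimal sketch. The function name, the diversity measure (fraction of distinct transformations), and the linear length penalty are all illustrative assumptions, not the exact formula used by AIMED-RL:

```python
def diversity_penalized_reward(base_reward, actions_taken,
                               max_length=10, penalty_weight=0.5):
    """Hypothetical sketch of a reward that discourages homogeneous,
    long transformation sequences, in the spirit of AIMED-RL's
    penalization (the paper's actual formula differs)."""
    if not actions_taken:
        return base_reward
    # Diversity: fraction of distinct transformations in the sequence.
    diversity = len(set(actions_taken)) / len(actions_taken)
    # Length penalty: grows as the sequence approaches max_length.
    length_penalty = len(actions_taken) / max_length
    return base_reward * diversity - penalty_weight * length_penalty
```

With this shape, a sequence that repeats one transformation earns a strictly lower reward than an equally long sequence of distinct transformations, nudging the agent toward short, diverse chains.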
“…Numerous studies have been published in the literature to test the robustness of machine learning-based systems against adversarial samples. Some researchers have used gradient-based algorithms [15] and genetic algorithms [16], [17], while substantial work has been conducted on exploiting reinforcement learning [18], [19], [20], [21] to produce modifications in malware files that help them evade detection. Anderson et al [18] were among the first to show that reinforcement learning (RL) can be successfully used to generate adversarial examples in the problem space for Windows Portable Executable (PE) files by introducing semantic-preserving actions to modify the malware.…”
confidence: 99%
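The episode structure these RL attacks share can be sketched generically. Everything here is an assumption for illustration: the action names are invented stand-ins for semantic-preserving PE transformations (gym-malware defines its own action set), and the loop samples actions uniformly rather than learning a policy as a real RL agent would:

```python
import random

# Hypothetical semantic-preserving PE transformations (names illustrative).
ACTIONS = ["append_overlay_bytes", "add_unused_section",
           "rename_section", "append_import"]

def evade(detect, apply_action, sample, max_turns=10, seed=0):
    """Minimal episode loop: apply semantic-preserving actions until the
    black-box detector flips its verdict, or the turn budget runs out.
    `detect` and `apply_action` are caller-supplied stand-ins for the
    classifier and the transformation engine."""
    rng = random.Random(seed)
    history = []
    for _ in range(max_turns):
        if not detect(sample):          # detector says benign: evaded
            return sample, history
        action = rng.choice(ACTIONS)    # a trained policy would go here
        sample = apply_action(sample, action)
        history.append(action)
    return sample, history
```

The episode ends either on successful evasion or on exhausting the budget; in the actual frameworks, the terminal signal and per-step transformation costs feed the reward that trains the policy.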