2019
DOI: 10.1007/978-3-030-37231-6_22

On Effectiveness of Adversarial Examples and Defenses for Malware Classification

Abstract: Artificial neural networks have been successfully used for many different classification tasks including malware detection and distinguishing between malicious and non-malicious programs. Although artificial neural networks perform very well on these tasks, they are also vulnerable to adversarial examples. An adversarial example is a sample that has minor modifications made to it so that the neural network misclassifies it. Many techniques have been proposed, both for crafting adversarial examples and for hard…
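As a concrete illustration of the "minor modifications" the abstract describes, below is a minimal gradient-based sketch in the style of the fast gradient sign method (FGSM). This is an assumed illustration of adversarial example crafting in general, not the specific technique evaluated in the paper; the model, inputs, and `epsilon` are placeholders.

```python
# Minimal FGSM-style sketch (assumed illustration, not the paper's method):
# nudge an input in the direction of the loss gradient so a trained
# classifier misclassifies it while the change stays small.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.1) -> torch.Tensor:
    """Return x perturbed by epsilon * sign(d loss / d x)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()  # populates x_adv.grad
    # Step in the direction that increases the loss, then detach.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```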

Cited by 14 publications (16 citation statements)
References 22 publications

“…Machine learning techniques were not primarily designed to work with cyber security so an evasion can easily fool the ML [608][609][610]. Research is going on to provide a solution by having adversarial training [611][612][613][614][615].…”
Section: Techniques and Methods
confidence: 99%
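The defense direction this citing work points to, adversarial training, can be sketched as follows: perturb each training batch with an FGSM step (as above) and train on a mix of clean and perturbed inputs. The model, optimizer, and `epsilon` are assumed placeholders; this is a generic sketch, not the defense any cited paper implements.

```python
# Hedged sketch of adversarial training: craft FGSM inputs per batch,
# then optimize on clean + adversarial samples together.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    # Craft adversarial inputs with one FGSM step.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()
    # Clear the gradients accumulated while crafting, then train.
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```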
“…Therefore, adversarial learning attacks and protections on deep learning based Android malware analysis attracts considerable attention currently. In collected studies, the researchers investigate the effectiveness of the adversarial attacks against DL-based Android malware analyzers [26,57,78,92], the robustness of defense strategies [156], or both of them [90,128,157].…”
Section: Adversarial Learning Attacks and Protections
confidence: 99%
“…OPEN ISSUE Verifying the existence of certain categorical characteristics of Android applications, such as permissions/API calls by static analysis or certain malicious behaviors by dynamic analysis, is widely used to construct feature vectors [5,7,18,22,29,40,44,45,47,57,67,68,74,78,90,92,101,105,109,111,113,128,131,143,145,156,157,166,170,172,174,179,185,197,198,203,209]. The researchers usually build a look-up table to list all the potential features, based on prior knowledge or feature selection approaches, and a fixed-size binary feature vector is created to represent the feature information for each application.…”
Section: RQ2.1 How Features Are Processed For Model Training?
confidence: 99%
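The fixed-size binary feature vector this citing survey describes can be sketched directly: a look-up table lists all potential features, and each application is encoded as a 0/1 vector over that table. The table entries below are assumed examples of the permissions/API calls mentioned in the quote.

```python
# Hedged sketch of the look-up-table encoding: 1 if the app exhibits
# the feature at that index, else 0. Entries are assumed examples.
FEATURE_TABLE = [
    "android.permission.INTERNET",
    "android.permission.SEND_SMS",
    "android.permission.READ_CONTACTS",
    "TelephonyManager.getDeviceId",
]

def to_feature_vector(app_features: set) -> list:
    """Map an app's observed features to a fixed-size binary vector."""
    return [1 if feat in app_features else 0 for feat in FEATURE_TABLE]

# Example: an app requesting INTERNET and SEND_SMS -> [1, 1, 0, 0]
print(to_feature_vector({"android.permission.INTERNET",
                         "android.permission.SEND_SMS"}))
```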
“…DL-based malware detection approaches are susceptible to adversarial attacks [33][34][35][36][37]. Adversarial modifications by manipulating only a small fraction of raw binary data may lead to misclassification.…”
Section: Adversarial Attack Against Malware Detection Model
confidence: 99%
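One common way the "small fraction of raw binary data" manipulation is realized in the literature is append-only padding: extra bytes added past the end of an executable change its raw-byte representation without changing its behavior. The sketch below illustrates that idea only; the byte values and stand-in binary are assumptions, not taken from the cited work.

```python
# Hedged sketch of a functionality-preserving byte-level perturbation:
# appending bytes after a PE file's end is never executed, so behavior
# is preserved while the raw-byte input to a detector shifts.
def append_padding(raw_bytes: bytes, payload: bytes) -> bytes:
    """Return the binary with crafted bytes appended at the end."""
    return raw_bytes + payload

original = b"MZ\x90\x00"          # stand-in for a real PE file's bytes
perturbed = append_padding(original, b"\x00" * 64)
assert perturbed[:len(original)] == original  # original bytes untouched
```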