2014 IEEE Symposium on Security and Privacy
DOI: 10.1109/sp.2014.20

Practical Evasion of a Learning-Based Classifier: A Case Study

Abstract: Learning-based classifiers are increasingly used for detection of various forms of malicious data. However, if they are deployed online, an attacker may attempt to evade them by manipulating the data. Examples of such attacks have been previously studied under the assumption that an attacker has full knowledge about the deployed classifier. In practice, such assumptions rarely hold, especially for systems deployed online. A significant amount of information about a deployed classifier system can be ob…

Cited by 240 publications (237 citation statements). References 37 publications.
“…Wagner and Soto [14] demonstrated the mimicry attack against a host-based IDS that mimics the legitimate sequence of system calls. Srndic and Laskov [15] presented a mimicry attack against PDF Rate [16], a system to detect malicious PDF files based on the random forest classifier.…”
Section: Related Work
confidence: 99%
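To make the mimicry idea concrete, the sketch below pads a malicious sample's feature vector toward typical benign values before a random-forest detector scores it. The feature semantics, data, and classifier here are illustrative assumptions only; this is not PDF Rate itself nor the implementation from either cited paper.

```python
# Minimal sketch of a feature-space mimicry attack against a random-forest
# detector. Features are hypothetical counts of benign structural PDF
# keywords (dims 0-2) plus one mildly suspicious keyword count (dim 3).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_benign = rng.poisson(lam=[40, 30, 20, 5], size=(200, 4)).astype(float)
X_malicious = rng.poisson(lam=[2, 1, 1, 6], size=(200, 4)).astype(float)
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Mimicry: add benign-looking content so the feature vector moves toward the
# benign centroid. Only additive changes are made, so in practice the
# malicious payload itself could be preserved.
target = X_benign.mean(axis=0)
sample = X_malicious[0].copy()
mimic = np.maximum(sample, target)   # raise features, never remove them

print("malicious score before:", clf.predict_proba([sample])[0, 1])
print("malicious score after :", clf.predict_proba([mimic])[0, 1])
```

The additive constraint reflects the common practical restriction that an attacker can insert benign content into a malicious file more easily than remove functionality from it.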
“…Srndic and Laskov [9] applied a gradient descent-kernel density estimation attack against the PDF Rate system that uses SVM and random forest classifiers. Biggio et al. [10] demonstrated a gradient descent attack against the SVM classifier and a neural network.…”
Section: Related Work
confidence: 99%
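As a rough illustration of the gradient descent-kernel density estimation idea, the sketch below descends the decision function of an RBF-SVM surrogate while a KDE term over benign points pulls the sample toward dense benign regions. The two-dimensional toy data, the surrogate SVM, and all parameters are assumptions for illustration; this is not the attack code from [9] or [10].

```python
# Gradient-descent evasion with a KDE "mimicry" term against an RBF-SVM surrogate.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_benign = rng.normal(loc=-1.0, scale=0.5, size=(200, 2))
X_malicious = rng.normal(loc=+1.0, scale=0.5, size=(200, 2))
X = np.vstack([X_benign, X_malicious])
y = np.array([-1] * 200 + [+1] * 200)      # -1 = benign, +1 = malicious

gamma = 1.0
svm = SVC(kernel="rbf", gamma=gamma).fit(X, y)
sv, alpha = svm.support_vectors_, svm.dual_coef_.ravel()

def grad_decision(x):
    # Gradient of f(x) = sum_i alpha_i * exp(-gamma * ||x - sv_i||^2) + b.
    diff = x - sv
    k = np.exp(-gamma * (diff ** 2).sum(axis=1))
    return (-2.0 * gamma * alpha * k) @ diff

def grad_kde(x, h=0.5):
    # Gradient of a Gaussian KDE over benign points; pulls x into benign density.
    diff = x - X_benign
    k = np.exp(-(diff ** 2).sum(axis=1) / (2.0 * h ** 2))
    return (-(k / h ** 2) @ diff) / len(X_benign)

x = X_malicious[0].copy()                  # start from a malicious sample
lam, step = 0.5, 0.1                       # mimicry weight and step size
for _ in range(100):
    # Minimize f(x) - lam * density(x) by following its gradient downhill.
    x -= step * (grad_decision(x) - lam * grad_kde(x))
    if svm.decision_function([x])[0] < 0:  # crossed into the benign region
        break

print("final decision value:", svm.decision_function([x])[0])
```

The KDE term is what distinguishes this attack from plain gradient descent on the decision score: it keeps the evading sample near realistic benign data rather than in low-density regions the classifier has never seen.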
“…Drift refers to the non-stationarity of data where the data distribution changes over time. This drift can be natural gradual drift, for example, changes to a user's preference over time, or adversarial drift where an adversary changes the data to purposefully decrease the classification accuracy [14,25,53]. For example, using malware re-packaging toolkits, known as Fully Un-Detectable or FUD crypters, malware vendors repackage malware to evade anti-virus tools [7].…”
Section: Active Learning for Security
confidence: 99%
“…For instance, an attacker inserts positively labeled datapoints with words of heavy negative sentiment. To our knowledge, our work is the first to demonstrate adversarial machine learning attacks in such a practical scenario. Recent work has also considered practical scenarios, but focused on evasion attacks [20,21].…”
Section: Introduction
confidence: 99%
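The poisoning scenario described in that excerpt can be illustrated with a toy sentiment classifier: injected training reviews full of negative words but labeled positive cause negative test reviews to be misclassified. The texts, labels, and logistic-regression model below are hypothetical and only sketch the idea; they are not taken from the cited work.

```python
# Toy illustration of poisoning a sentiment classifier with mislabeled points.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clean_texts = [
    "great product works perfectly", "excellent value highly recommend",
    "terrible quality broke quickly", "awful experience complete waste",
] * 25
clean_labels = [1, 1, 0, 0] * 25          # 1 = positive, 0 = negative

# Poison points: heavy negative sentiment, but labeled positive by the attacker.
poison_texts = ["terrible awful broke waste horrible useless"] * 40
poison_labels = [1] * 40

def predictions(train_texts, train_labels):
    # Train a bag-of-words logistic regression and score two held-out reviews.
    model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(train_texts, train_labels)
    return model.predict(["awful broke waste", "great excellent recommend"])

print("clean model   :", predictions(clean_texts, clean_labels))
print("poisoned model:", predictions(clean_texts + poison_texts,
                                     clean_labels + poison_labels))
```

After poisoning, the negative-sentiment words carry positive weight in the learned model, so the negative test review flips to a positive prediction, which is the accuracy degradation the excerpt describes.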