Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement 2009
DOI: 10.1145/1644893.1644895
ANTIDOTE: Understanding and Defending against Poisoning of Anomaly Detectors

Abstract: Statistical machine learning techniques have recently garnered increased popularity as a means to improve network design and security. For intrusion detection, such methods build a model for normal behavior from training data and detect attacks as deviations from that model. This process invites adversaries to manipulate the training data so that the learned model fails to detect subsequent attacks. We evaluate poisoning techniques and develop a defense, in the context of a particular anomaly detector, namely the PCA-subspace method for detecting anomalies in backbone network traffic.
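The detector in question models normal behavior as a low-dimensional PCA subspace of the traffic matrix and flags observations whose residual energy outside that subspace is large. A minimal sketch of that idea follows; the rank k, the squared-prediction-error score, and the 99th-percentile cutoff are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of a PCA-subspace anomaly detector: model "normal"
# traffic with the top-k principal components and flag points whose
# residual energy is large. Names and the threshold rule are assumptions.
import numpy as np

def fit_pca_detector(X, k=4):
    """X: (n_samples, n_features) matrix of 'normal' training traffic."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Top-k principal components via SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                           # (n_features, k) normal-subspace basis
    # Residual ("squared prediction error") of training points sets the cutoff.
    resid = Xc - (Xc @ V) @ V.T
    scores = np.sum(resid ** 2, axis=1)
    threshold = np.percentile(scores, 99)  # simple empirical false-positive budget
    return mu, V, threshold

def is_anomalous(x, mu, V, threshold):
    r = (x - mu) - V @ (V.T @ (x - mu))    # component outside the normal subspace
    return float(r @ r) > threshold

# Usage: train on clean traffic, then score new observations.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 20))       # stand-in for a link-traffic matrix
mu, V, thr = fit_pca_detector(X_train)
print(is_anomalous(X_train[0] + 10.0, mu, V, thr))  # large deviation -> True
```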

Cited by 243 publications (15 citation statements)
References 32 publications
“…As ML models need to be updated to recognize new threats, adversaries may aim to poison the ML model by injecting misleading training data. Rubinstein et al. [37] and Biggio et al. [38] demonstrated such poisoning attacks against anomaly detectors and support vector machines (SVMs), respectively. Specifically for mobile malware detection systems, Chen et al. [39] discussed how to automate poisoning attacks and defenses.…”
Section: Attacks on Machine Learning Models and Deep Neural Networks
confidence: 99%
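To make the poisoning threat in the statement above concrete, here is a hypothetical toy example of a causative attack: an adversary who can slip chaff into the training window before a periodic retraining inflates a mean-plus-variance threshold until a later attack slides under it. The detector, traffic distributions, and chaff parameters are all invented for illustration.

```python
# Hypothetical data-poisoning ("causative") attack on a simple threshold
# detector: injected chaff drags the retrained model's threshold upward
# until the intended attack is no longer flagged.
import numpy as np

rng = np.random.default_rng(1)

def train_threshold(traffic, n_sigma=3.0):
    # "Normal" model: flag anything above mean + n_sigma standard deviations.
    return traffic.mean() + n_sigma * traffic.std()

clean = rng.normal(100.0, 5.0, size=1000)   # benign traffic volumes
attack_volume = 160.0                        # traffic level the attacker wants

thr = train_threshold(clean)
print(f"clean threshold {thr:.1f}; attack detected: {attack_volume > thr}")

# Poisoning: before retraining, inject plausible but inflated points that
# drag both the mean and the variance of the training window upward.
chaff = rng.normal(140.0, 5.0, size=300)
thr = train_threshold(np.concatenate([clean, chaff]))
print(f"poisoned threshold {thr:.1f}; attack detected: {attack_volume > thr}")
```

The same "Boiling Frog" effect drives the attacks studied against the PCA detector: each injected point looks individually plausible, but collectively the chaff rotates or stretches the learned model toward the attacker's target.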
“…When frequently updating an ML model to account for new threats, malicious adversaries can launch causative (data-poisoning) attacks that intentionally inject misleading training data so that the ML model becomes ineffective. For example, various poisoning attacks on specific ML algorithms [17,18] were able to bypass intrusion detection systems. More recently, Chen et al. [19] demonstrated how to automate poisoning attacks against malware detection systems.…”
Section: Machine Learning in Adversarial Settings
confidence: 99%
“…Meanwhile, a dynamic-threshold defense recommended dynamically adjusting threshold values in SpamBayes. Although this approach increased accuracy on legitimate emails, it had difficulty correctly labeling spam emails. Facing poisoning attacks against the principal component analysis (PCA) subspace anomaly detection method used in backbone networks, Rubinstein et al. (2009) proposed a robust-PCA-based defense. It maximized the Median Absolute Deviation (MAD) instead of the variance when computing principal components, and used a robust threshold based on the Laplace distribution instead of the Gaussian.…”
Section: Robust and Secure Learning Strategies
confidence: 99%
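As a rough sketch of that defense, the snippet below swaps the variance objective for MAD when choosing a principal direction and derives the residual cutoff from a Laplace fit (median for location, mean absolute deviation for scale) rather than from Gaussian statistics. The random-search direction finder is a simplified stand-in for the PCA-GRID projection-pursuit estimator the paper uses, and all parameters here are assumptions.

```python
# Sketch of an ANTIDOTE-style robust defense: choose principal directions
# by maximizing MAD rather than variance, and set the residual cutoff from
# a Laplace fit rather than a Gaussian one.
import numpy as np

def mad(z):
    """Median Absolute Deviation: a robust spread estimate."""
    return np.median(np.abs(z - np.median(z)))

def robust_direction(X, n_candidates=2000, seed=0):
    """Pick the unit direction whose projections have maximal MAD
    (a crude stand-in for the PCA-GRID search)."""
    rng = np.random.default_rng(seed)
    best_v, best_m = None, -np.inf
    for _ in range(n_candidates):
        v = rng.normal(size=X.shape[1])
        v /= np.linalg.norm(v)
        m = mad(X @ v)
        if m > best_m:
            best_v, best_m = v, m
    return best_v

def laplace_threshold(scores, fp_rate=0.01):
    """Cutoff from a Laplace fit: location = median, scale = mean |deviation|.
    Solves P(score > t) = fp_rate for the upper tail."""
    loc = np.median(scores)
    scale = np.mean(np.abs(scores - loc))
    return loc - scale * np.log(2.0 * fp_rate)

# Usage: fit on possibly-poisoned data; MAD resists the injected chaff.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 10))
X[:50] += 8.0                          # 10% poisoned rows, shifted off-axis
X = X - np.median(X, axis=0)           # robust centering
v = robust_direction(X)
resid = X - np.outer(X @ v, v)         # residual off the top robust component
scores = np.sum(resid ** 2, axis=1)
print("Laplace-based threshold:", laplace_threshold(scores))
```

Because the median and MAD have high breakdown points, a modest fraction of poisoned rows shifts the fitted direction and threshold far less than it would shift their mean/variance counterparts.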