2018 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/sp.2018.00057

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning

Abstract: As machine learning becomes widely used for automated decisions, attackers have strong incentives to manipulate the results and models generated by machine learning algorithms. In this paper, we perform the first systematic study of poisoning attacks and their countermeasures for linear regression models. In poisoning attacks, attackers deliberately influence the training data to manipulate the results of a predictive model. We propose a theoretically-grounded optimization framework specifically designed for l…
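
To make the threat model concrete: in a poisoning attack the adversary controls a small fraction of the training set and chooses those points so as to degrade the learned regressor. The paper does this with a theoretically-grounded optimization framework; the sketch below is only a minimal illustration on assumed toy data, using a crude heuristic (injecting points with inverted responses) rather than the authors' attack. All names, rates, and data here are hypothetical.

import numpy as np

# Hypothetical sketch of a data poisoning attack on linear regression.
# NOT the paper's optimization-based attack: the adversary simply
# injects points whose responses are mirrored within the feasible range.
rng = np.random.default_rng(0)

# Clean training data drawn from y = 2x + 1 + noise, with x in [0, 1].
n_clean = 100
X_clean = rng.uniform(0.0, 1.0, size=n_clean)
y_clean = 2.0 * X_clean + 1.0 + rng.normal(0.0, 0.1, size=n_clean)

def fit_ols(x, y):
    # Ordinary least squares with an intercept via lstsq.
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # [slope, intercept]

# Attacker controls 10% of the training set; responses are mirrored
# about y = 2, so poisoned points stay inside the legitimate range [1, 3].
n_poison = n_clean // 10
X_poison = rng.uniform(0.0, 1.0, size=n_poison)
y_poison = 4.0 - (2.0 * X_poison + 1.0)

coef_clean = fit_ols(X_clean, y_clean)
coef_pois = fit_ols(np.concatenate([X_clean, X_poison]),
                    np.concatenate([y_clean, y_poison]))

# Evaluate both fits on clean held-out data.
x_test = rng.uniform(0.0, 1.0, size=50)
y_test = 2.0 * x_test + 1.0
for name, (m, b) in [("clean", coef_clean), ("poisoned", coef_pois)]:
    mse = np.mean((m * x_test + b - y_test) ** 2)
    print(f"{name:8s} slope={m:+.2f} intercept={b:+.2f} test MSE={mse:.3f}")

Even at a 10% poisoning rate, the fitted slope is pulled noticeably toward the poisoned points and the held-out error grows substantially relative to the clean fit, which is the effect the paper's attacks optimize for directly.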

Cited by 598 publications (497 citation statements)
References 34 publications

Citation statements:
“…Another research direction may be that of testing our defense against training-time poisoning attacks [2], [5], [36]–[38].…”
Section: Discussion (mentioning)
Confidence: 99%
“…Other than adversarial examples, we could also leverage data poisoning attacks [65]–[72] to defend against inference attacks. Specifically, an attacker needs to train an ML classifier in inference attacks.…”
Section: Data Poisoning Attacks Based Defenses (mentioning)
Confidence: 99%
“…Helen does not prevent a malicious party from choosing a bad dataset for the coopetitive computation (e.g., in an attempt to alter the computation result). In particular, Helen does not prevent poisoning attacks [48,19]. MPC protocols generally do not protect against bad inputs because there is no way to ensure that a party provides true data.…”
Section: Threat Model (mentioning)
Confidence: 99%