Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security 2017
DOI: 10.1145/3128572.3140451
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

Abstract: A number of online services nowadays rely upon machine learning to extract valuable information from data collected in the wild. This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process. To date, these attacks have been devised only against a limited class of binary learning algorithms, due to the inherent complexity of the gradient-based procedure used to op…
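To make the threat model concrete, the following is a minimal sketch of data poisoning, assuming a simple label-flipping attacker rather than the paper's back-gradient optimization: the attacker controls a fraction of the training labels and flips them to degrade the learned model. All names (`make_data`, `train_logreg`) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Two Gaussian blobs: class 0 around (-1,-1), class 1 around (+1,+1).
    X0 = rng.normal(loc=-1.0, scale=0.7, size=(n // 2, 2))
    X1 = rng.normal(loc=+1.0, scale=0.7, size=(n // 2, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

def train_logreg(X, y, lr=0.1, epochs=200):
    # Plain gradient descent on the logistic loss.
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # sigmoid
        w -= lr * Xb.T @ (p - y) / len(y)       # average gradient step
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(((Xb @ w > 0).astype(int) == y).mean())

X_tr, y_tr = make_data(400)
X_te, y_te = make_data(400)

# Attacker flips half of the class-1 training labels to 0,
# pulling the decision boundary toward the class-1 region.
y_poison = y_tr.copy()
cls1 = np.where(y_tr == 1)[0]
flip = rng.choice(cls1, size=len(cls1) // 2, replace=False)
y_poison[flip] = 0

acc_clean = accuracy(train_logreg(X_tr, y_tr), X_te, y_te)
acc_pois = accuracy(train_logreg(X_tr, y_poison), X_te, y_te)
print(acc_clean, acc_pois)
```

The paper's back-gradient attack is far more powerful than this sketch: instead of flipping labels, it optimizes the poisoning points themselves by differentiating through the learner's training procedure.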

Cited by 435 publications (397 citation statements)
References 24 publications
“…Two primary threat models are proposed in the literature. (i) Poisoning attacks, in which the adversary pollutes the training data to eventually compromise the ML systems [9,41,61,62]. Such attacks can be further categorized as targeted and untargeted attacks.…”
Section: Related Work
confidence: 99%
“…Other than adversarial examples, we could also leverage data poisoning attacks [65][66][67][68][69][70][71][72] to defend against inference attacks. Specifically, an attacker needs to train an ML classifier in inference attacks.…”
Section: Data Poisoning Attacks Based Defenses
confidence: 99%
“…An important property of the regions determined by Algorithm 2 is stated by the following proposition, where L_k(R) is defined in (6). In other words, the LiDAR ray with angle θ_k intersects the same obstacle edge regardless of the robot position.…”
Section: Partitioning the Workspace
confidence: 99%
“…Motivated by the urgency to study the safety, reliability, and potential problems that can arise and impact society through the deployment of AI-enabled systems in the real world, several works in the literature focused on the problem of designing deep neural networks that are robust to the so-called adversarial examples [2][3][4][5][6][7][8]. Unfortunately, these techniques focus mainly on the robustness of the learning algorithm with respect to data outliers without providing guarantees in terms of safety and reliability of the decisions taken by these neural networks.…”
Section: Introduction
confidence: 99%