2021 · DOI: 10.48550/arxiv.2106.09993 · Preprint

Accumulative Poisoning Attacks on Real-time Data

Abstract: Collecting training data from untrusted sources exposes machine learning services to poisoning adversaries, who maliciously manipulate training data to degrade the model accuracy. When trained on offline datasets, poisoning adversaries have to inject the poisoned data in advance before training, and the order of feeding these poisoned batches into the model is stochastic. In contrast, practical systems are more usually trained/fine-tuned on sequentially captured real-time data, in which case poisoning adversar…
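To make the contrast in the abstract concrete, here is a minimal sketch (not from the paper) of the real-time setting it describes: the model is updated on batches in arrival order, so an adversary sitting on the stream can adapt each poisoned batch to the current model state, whereas offline poisons must be fixed before training and arrive in stochastic batch order. The helper name `adversary_perturb` and the loop structure are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of training on real-time data:
# the model is updated on sequentially captured batches, so a stream-level
# adversary can perturb each batch *after* observing the current model state.
import torch
import torch.nn.functional as F

def online_training(model, optimizer, clean_stream, adversary_perturb=None):
    """clean_stream yields (x, y) batches in arrival order;
    adversary_perturb (hypothetical) maps (model, x, y) -> poisoned (x, y)."""
    for x, y in clean_stream:
        if adversary_perturb is not None:
            # Real-time adversary: can adapt the poison to the current model.
            x, y = adversary_perturb(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
    return model
```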

Cited by 2 publications (3 citation statements) · References 50 publications

“…This type of data poisoning attack only manipulates training data and labels, without the need to modify testing data after the victim model is deployed. Training-only poisoning attacks include both untargeted attacks, where the adversary aims to degrade model performance on normal testing data [995,997,998,999], and targeted attacks, in which the adversary aims to change the behavior of the model on particular testing inputs [1000,1001,1002]. Below we introduce some typical approaches.…”
Section: Training-only Poisoning Attacks (mentioning, confidence: 99%)
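As a toy illustration of the untargeted, training-only setting described in the statement above, the following sketch flips a fraction of training labels while leaving test-time inputs untouched. Label flipping is only one simple instance of such attacks; the helper and its parameters are hypothetical rather than taken from any of the cited works.

```python
# Toy illustration (assumption: simple label flipping) of an untargeted
# training-only poisoning attack: corrupt a fraction of training labels,
# never modify inputs at test time.
import torch

def flip_labels(y, num_classes, poison_fraction=0.1, seed=0):
    """Return a copy of y with a random fraction of labels reassigned."""
    g = torch.Generator().manual_seed(seed)
    y = y.clone()
    n_poison = int(poison_fraction * len(y))
    idx = torch.randperm(len(y), generator=g)[:n_poison]
    # Shift each selected label by a nonzero offset, so it lands on a different class.
    offsets = torch.randint(1, num_classes, (n_poison,), generator=g)
    y[idx] = (y[idx] + offsets) % num_classes
    return y
```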
“…However, it applies a greedy strategy to lower model accuracy at each update step, which limits the step-wise destructive effect. Recent work [999] proposes accumulative poisoning attacks, in which the model state is secretly steered towards a trigger batch during an accumulative phase (i.e., while keeping accuracy in a reasonable range), and the model is then suddenly broken by feeding in the trigger batch, before the monitor becomes aware of the attack.…”
Section: Training-only Poisoning Attacks (mentioning, confidence: 99%)
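The two-phase structure described in this statement can be sketched at a high level as follows. This is only an illustration of the accumulative-phase-then-trigger-batch idea, not the authors' optimization procedure; `craft_accumulative_batch`, `monitor_acc`, and the accuracy floor are hypothetical stand-ins.

```python
# High-level sketch (assumption: not the authors' algorithm) of the two-phase
# structure: an accumulative phase secretly steers the model state while
# keeping monitored accuracy within a tolerated range, then a single trigger
# batch causes a sudden accuracy drop before the monitor reacts.
import torch
import torch.nn.functional as F

def accumulative_then_trigger(model, optimizer, stream, craft_accumulative_batch,
                              trigger_batch, monitor_acc, acc_floor=0.9):
    # Phase 1: accumulative phase. Each poisoned batch is crafted so that the
    # monitored accuracy stays above acc_floor (hypothetical check).
    for x, y in stream:
        x_p, y_p = craft_accumulative_batch(model, x, y)  # hypothetical helper
        optimizer.zero_grad()
        F.cross_entropy(model(x_p), y_p).backward()
        optimizer.step()
        if monitor_acc(model) < acc_floor:
            break  # in practice the attack is crafted so this never triggers
    # Phase 2: feed the trigger batch; the accumulated state amplifies its
    # destructive effect in a single update.
    x_t, y_t = trigger_batch
    optimizer.zero_grad()
    F.cross_entropy(model(x_t), y_t).backward()
    optimizer.step()
    return model
```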
“…Wang et al. (2020) note that backdoor success is high in edge cases not seen in training and that backdoors that attack "rare" samples (such as only airplanes in a specific color in images, or a specific sentence in text) can be much more successful, as other users do not influence these predictions significantly. A number of variants of this attack exist (Costa et al., 2021; Pang et al., 2021; Fang et al., 2020; Baruch et al., 2019; Xie et al., 2019; Datta et al., 2021; Yoo & Kwak, 2022; Zhang et al., 2019; Sun et al., 2022), for example allowing for collusion between multiple users or generating additional data for the attacker. In this work we will focus broadly on the threat model of Bagdasaryan et al. (2019); Wang et al. (2020).…”
Section: Can You Backdoor Federated Learning? (mentioning, confidence: 99%)
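For context on the edge-case backdoor threat model referenced in this statement, a minimal sketch of a single malicious federated client is given below, assuming FedAvg-style averaging. The loader of relabeled rare samples and the model-replacement scaling are illustrative assumptions, not any cited paper's exact procedure.

```python
# Minimal sketch (assumptions: FedAvg-style aggregation, one malicious client)
# of an edge-case backdoor update: the client trains on rare "edge-case"
# samples relabeled to the attacker's target class, then scales its update so
# it survives averaging with the benign clients' updates.
import copy
import torch
import torch.nn.functional as F

def malicious_client_update(global_model, backdoor_loader, num_clients,
                            lr=0.01, local_epochs=5):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(local_epochs):
        for x, y_target in backdoor_loader:  # rare inputs with attacker-chosen labels
            opt.zero_grad()
            F.cross_entropy(model(x), y_target).backward()
            opt.step()
    # Scale the delta so that, after averaging over num_clients, it roughly
    # replaces the global model (the "model replacement" trick).
    update = {}
    for (name, w_new), w_old in zip(model.state_dict().items(),
                                    global_model.state_dict().values()):
        update[name] = num_clients * (w_new - w_old)
    return update
```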