2022
DOI: 10.1109/mnet.011.2000783
Coordinated Backdoor Attacks against Federated Learning with Model-Dependent Triggers

Cited by 33 publications (33 citation statements)
References 10 publications
“…The focus of these attacks is to misclassify client samples by injecting poisoned local models, rather than to reconstruct samples from shared models. Poisoning attacks on FL are usually mounted by manipulating either the training data [11], [12], [13], [6], [23] or the local training process [6], [14], [15], [16] of an edge client. We briefly review related research on poisoning attacks against FL in the following.…”
Section: A. Poisoning Attacks
confidence: 99%
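To make the two poisoning avenues in the excerpt above concrete, here is a minimal sketch of a malicious FL client that either manipulates its training data (label flipping) or the output of its local training process (boosting its update before upload). All function names and constants are illustrative assumptions, not taken from the cited papers.

```python
# Minimal sketch of the two poisoning avenues in federated learning:
# (1) data poisoning (manipulating local training data) and
# (2) model poisoning (manipulating the local training process / update).
# Names and numbers are illustrative only.
import numpy as np

def data_poisoning(labels: np.ndarray, source: int, target: int) -> np.ndarray:
    """Label flipping: relabel every `source`-class sample as `target`."""
    poisoned = labels.copy()
    poisoned[poisoned == source] = target
    return poisoned

def model_poisoning(local_update: np.ndarray, boost: float = 10.0) -> np.ndarray:
    """Manipulate the training-process output: boost the malicious update
    so it dominates the server's averaging step."""
    return boost * local_update

if __name__ == "__main__":
    labels = np.array([0, 1, 7, 7, 3])
    print(data_poisoning(labels, source=7, target=1))   # [0 1 1 1 3]

    update = np.array([0.01, -0.02, 0.005])
    print(model_poisoning(update))                       # [ 0.1  -0.2   0.05]
```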
“…Data poisoning: distributed-trigger [18]; coordinated-trigger [59]. Model poisoning (fully poisoning): scaling-based [16], [22]; constraint-based [16]. The backdoor attack was first introduced into FL by Bagdasaryan et al. [16]. Since then, backdoor attacks have received widespread attention and become a primary security threat in FL.…”
Section: Single-trigger: [65], [16], [22], [66]
confidence: 99%
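The "scaling-based" entry above refers to the model-replacement idea of Bagdasaryan et al. [16]: the attacker scales its backdoored model so that it survives the server's averaging step and approximately replaces the global model. The following is a rough sketch under simplified assumptions (plain FedAvg with equal client weights, a single attacker, and a known client count n and server rate eta); the names are illustrative, not from the cited paper.

```python
# Sketch of scaling-based model replacement (after Bagdasaryan et al. [16]).
# Assumes aggregation of the form: G_new = G + (eta / n) * sum_i (L_i - G),
# one attacker, and equal client weights.
import numpy as np

def scaled_malicious_update(global_model, backdoored_model, n_clients, eta):
    """Scale the backdoored model so it survives averaging:
    L_m = (n / eta) * (X - G) + G approximately replaces G with X."""
    G, X = np.asarray(global_model), np.asarray(backdoored_model)
    return (n_clients / eta) * (X - G) + G

def fedavg(global_model, client_models, eta):
    """Vanilla FedAvg step with equal client weights."""
    G = np.asarray(global_model)
    updates = [np.asarray(L) - G for L in client_models]
    return G + (eta / len(client_models)) * sum(updates)

if __name__ == "__main__":
    G = np.zeros(3)
    benign = [G + 0.01 * np.random.randn(3) for _ in range(9)]
    X = np.array([1.0, -2.0, 0.5])                  # attacker's backdoored model
    L_m = scaled_malicious_update(G, X, n_clients=10, eta=1.0)
    print(fedavg(G, benign + [L_m], eta=1.0))        # result is close to X
```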
“…In this category of attack, no modification is made to the features of the backdoored samples. On the other hand, artificial backdoor attacks [16, 18, 59] aim to misclassify any poisoned input containing a backdoor trigger. Note that these backdoored samples are created by artificially inserting triggers into clean inputs.…”
Section: Techniques for Data Poisoning Attacks
confidence: 99%
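As a concrete illustration of "artificially inserting triggers into clean inputs", the sketch below stamps a small pixel patch onto a fraction of the images and relabels them to an attacker-chosen target class. The trigger shape, position, and poisoning rate are assumptions made for the example, not values from the cited papers.

```python
# Sketch of "artificial" backdoor poisoning: stamp a small trigger pattern
# onto clean inputs and relabel them to the attacker's target class.
import numpy as np

def add_trigger(images: np.ndarray, value: float = 1.0, size: int = 3) -> np.ndarray:
    """Place a `size` x `size` bright patch in the bottom-right corner."""
    triggered = images.copy()
    triggered[:, -size:, -size:] = value
    return triggered

def poison(images, labels, target_label, rate=0.1, rng=None):
    """Poison a fraction `rate` of the dataset with the trigger + target label."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    x, y = images.copy(), labels.copy()
    x[idx] = add_trigger(images[idx])
    y[idx] = target_label
    return x, y

if __name__ == "__main__":
    imgs = np.zeros((100, 28, 28))           # e.g. MNIST-sized grayscale images
    lbls = np.arange(100) % 10
    px, py = poison(imgs, lbls, target_label=7, rate=0.1)
    print((py == 7).sum(), px.max())         # count of target labels, trigger value
```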
“…For example, adversarial examples [10, 11] trick the model into producing false predictions by adding small perturbations to clean training examples. Backdoor attacks [12, 13], which are stealthier than adversarial attacks, manipulate the training data by injecting samples carrying backdoor triggers. The model then makes incorrect predictions on samples containing the trigger, while its performance on otherwise normal data is unaffected.…”
Section: Introduction
confidence: 99%
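The small-perturbation idea in the excerpt above can be illustrated with the classic fast gradient sign method (FGSM), x_adv = x + eps * sign(dL/dx). The toy linear scorer below exists only to provide an analytic gradient for the demonstration; it is not taken from the cited works.

```python
# Sketch of an adversarial example via the fast gradient sign method (FGSM).
# The linear "model" and all values are illustrative assumptions.
import numpy as np

def fgsm(x, grad_x, eps=0.3):
    """Add a small, bounded perturbation in the direction that increases the loss."""
    return x + eps * np.sign(grad_x)

if __name__ == "__main__":
    w = np.array([1.0, -2.0, 0.5])            # toy linear classifier: score = w @ x
    x = np.array([0.5, -0.1, 0.2])            # clean input, classified positive (score > 0)
    # For a loss that pushes the score down (loss = -score), dL/dx = -w.
    x_adv = fgsm(x, grad_x=-w)
    print(w @ x, w @ x_adv)                    # clean score vs. perturbed score (sign flips)
```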