Proceedings of the 2022 Network and Distributed System Security Symposium (NDSS 2022)
DOI: 10.14722/ndss.2022.23156
DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection

Abstract: Federated Learning (FL) allows multiple clients to collaboratively train a Neural Network (NN) model on their private data without revealing the data. Recently, several targeted poisoning attacks against FL have been introduced. These attacks inject a backdoor into the resulting model that allows adversary-controlled inputs to be misclassified. Existing countermeasures against backdoor attacks are inefficient and often merely aim to exclude deviating models from the aggregation. However, this approach also remo…
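For context on the setting described in the abstract, the following is a minimal sketch of the federated averaging step an FL server typically performs on client updates. It is illustrative only, not the DeepSight implementation; all names and numbers in it are made up.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (one flat vector per client).

    client_weights: list of 1-D numpy arrays, one per client.
    client_sizes:   number of local training samples per client, used as weights.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()
    stacked = np.stack(client_weights)           # shape: (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Toy example: three clients, four parameters each; the last update is an outlier
# of the kind that a filtering-based defense might exclude from aggregation.
updates = [np.array([0.1, 0.2, 0.3, 0.4]),
           np.array([0.2, 0.1, 0.4, 0.3]),
           np.array([5.0, 5.0, 5.0, 5.0])]
global_update = fedavg(updates, client_sizes=[100, 120, 90])
print(global_update)
```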

Cited by 44 publications (44 citation statements)
References 6 publications (16 reference statements)
“…It should be noted that if the MA is less than 100%, misclassifications of the model can be counted in favor of the backdoor, especially if the model wrongly predicts the backdoor target. As already pointed out by Rieger et al [35], this phenomenon primarily occurs for image scenarios with pixel-based triggers. It causes the BA to be slightly higher than 0% for backdoor-free models.…”
Section: Overall Performance
confidence: 55%
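The statement above concerns how the backdoor accuracy (BA) metric behaves: when the main-task accuracy (MA) is below 100%, a clean model's ordinary misclassifications that happen to hit the backdoor target class still count towards BA, so BA stays slightly above 0% even without a backdoor. A hypothetical sketch of that effect follows; the metric definition reflects common usage in this literature, not any particular paper's code.

```python
import numpy as np

def backdoor_accuracy(predictions, backdoor_target):
    """Fraction of (triggered) inputs classified as the backdoor target class."""
    return float(np.mean(predictions == backdoor_target))

# Simulate a clean (backdoor-free) model with roughly 90% main-task accuracy on a
# 10-class problem: some of its ordinary mistakes land on the target class by chance.
rng = np.random.default_rng(0)
true_labels = rng.integers(0, 10, size=1000)
predictions = true_labels.copy()
wrong = rng.choice(1000, size=100, replace=False)    # ~10% of samples misclassified
predictions[wrong] = rng.integers(0, 10, size=100)   # arbitrary (possibly wrong) labels

target = 7
mask = true_labels != target                          # BA is usually measured on non-target samples
print("MA:", np.mean(predictions == true_labels))     # around 0.9
print("BA:", backdoor_accuracy(predictions[mask], target))  # above 0, despite no backdoor
```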
“…Image Classification (IC): We use the popular benchmark datasets MNIST, FMNIST, and CIFAR-10 in our experiments. As these datasets are frequently used for evaluating FL and backdoor attacks and defenses [3], [8], [14], [15], [16], [20], [23], [27], [30], [35], [42], [43], [44], [13], [34], they enable us to perform an equitable comparative analysis of our approach against other state-of-the-art approaches in the literature. All three consist of samples belonging to one out of ten classes: handwritten digits in the case of MNIST, articles of clothing in the case of FMNIST, and objects (airplanes, cars, birds, etc.)…”
Section: Methods
confidence: 99%
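As an illustration of the experimental setup that statement describes, the sketch below loads one of the named benchmark datasets with torchvision and splits it IID across simulated FL clients. The client count and split are assumptions made for illustration, not parameters from the cited paper.

```python
import torch
from torchvision import datasets, transforms

NUM_CLIENTS = 10  # illustrative; FL papers often simulate tens to thousands of clients

transform = transforms.ToTensor()
train_set = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
# FMNIST / CIFAR-10 are loaded the same way via datasets.FashionMNIST / datasets.CIFAR10.

# IID split: give each simulated client an equal random shard of the training data.
client_shards = torch.utils.data.random_split(
    train_set, [len(train_set) // NUM_CLIENTS] * NUM_CLIENTS,
    generator=torch.Generator().manual_seed(0))
client_loaders = [torch.utils.data.DataLoader(shard, batch_size=64, shuffle=True)
                  for shard in client_shards]
```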
“…Federated learning is susceptible to poisoning attacks by malicious participants, who may tamper with the local data or the local model parameters on the compromised clients. There has been substantial progress in recent years on modeling possible attacks and proposing defense methods against data poisoning [68] and model poisoning [22], [57], [62], [16]. Existing defenses can be adapted and integrated in CELEST to counter malicious interference during the federated learning training process.…”
Section: Discussion and Extensions
confidence: 99%
“…Existing defenses against such owners use Byzantine-robust aggregation rules such as trimmed mean [82], coordinate-wise mean [81] and Krum [11], which have been shown to be susceptible to backdoor and model poisoning attacks [34]. Recent work in FL such as FLTrust [16] and DeepSight [71] provides mitigation against backdoor attacks. Both strategies are inherently heuristic, while SafeNet offers provable robustness guarantees.…”
Section: F. Comparison With Federated Learning
confidence: 99%
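The last statement names coordinate-wise trimmed mean among the Byzantine-robust aggregation rules. A minimal sketch of that rule, assuming each client update is a flat parameter vector (illustrative code, not taken from any of the cited works):

```python
import numpy as np

def trimmed_mean(client_updates, trim_k):
    """Coordinate-wise trimmed mean: for every parameter, drop the trim_k smallest
    and trim_k largest client values, then average the remaining ones."""
    stacked = np.sort(np.stack(client_updates), axis=0)   # sort each coordinate across clients
    kept = stacked[trim_k:len(client_updates) - trim_k]
    return kept.mean(axis=0)

# Five clients, one of them submitting an extreme (poisoned-looking) update.
updates = [np.array([0.10, 0.20]), np.array([0.12, 0.18]), np.array([0.11, 0.21]),
           np.array([0.09, 0.22]), np.array([9.00, -9.00])]
print(trimmed_mean(updates, trim_k=1))   # the outlier coordinates are trimmed away
```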