2021
DOI: 10.1093/comjnl/bxab192

Defending Against Data Poisoning Attacks: From Distributed Learning to Federated Learning

Abstract: Federated learning (FL), a variant of distributed learning (DL), supports the training of a shared model without accessing private data from different sources. Despite its benefits with regard to privacy preservation, FL’s distributed nature and privacy constraints make it vulnerable to data poisoning attacks. Existing defenses, primarily designed for DL, are typically not well adapted to FL. In this paper, we study such attacks and defenses. In doing so, we start from the perspective of DL and then give consi…

Cited by 10 publications (9 citation statements). References 45 publications.
“…Although the GAN-based methods are promising for solving the non-IID problem, the high computational cost of training the generator and discriminator limits their applicability in practical systems. In addition, VHL [37] introduced a virtual homogeneous dataset, which can be generated from pure noise shared across clients, to calibrate the features from heterogeneous clients. Zero-shot learning has also been applied to synthetic data generation to promote fairness in FL [38], but knowledge of the global model alone is insufficient to generate synthetic data of adequate quality.…”
Section: Non-IID Problem in FL
Mentioning confidence: 99%
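The VHL idea quoted above can be sketched as follows. This is an illustrative sketch only: the function names, dataset sizes, and the squared-distance calibration term are assumptions for exposition, not VHL's exact formulation.

```python
import numpy as np

def shared_virtual_dataset(seed=42, n=64, dim=16, n_classes=4):
    """Generate a virtual homogeneous dataset from pure noise with a
    seed shared by all clients (the key idea behind VHL [37]): every
    client reconstructs the identical dataset locally, so no real
    data is ever exchanged."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, dim))
    y = rng.integers(0, n_classes, size=n)
    return X, y

def calibration_loss(features, y, anchors):
    """Hypothetical feature-calibration term: pull each virtual
    sample's feature toward its per-class anchor, so heterogeneous
    clients map the shared virtual data to aligned representations."""
    diffs = features - anchors[y]
    return float((diffs ** 2).sum(axis=1).mean())

# Two clients rebuild the same virtual dataset independently.
X_a, y_a = shared_virtual_dataset()
X_b, y_b = shared_virtual_dataset()
```

Because only a seed is shared, the calibration target is homogeneous across clients without revealing any private samples.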
“…In recent years, defense schemes against data poisoning attacks have been studied that remove outliers by computing the similarity between data representations [58]. Along these lines, Tian et al [59] proposed a strategy that detects and suppresses potential outliers to defend against data poisoning attacks in FL. For traffic flow prediction in an FL scenario, Qi et al [60] proposed an FL framework that uses a consortium blockchain technique.…”
Section: Homomorphic Encryption
Mentioning confidence: 99%
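A minimal sketch of the similarity-based outlier removal described above, assuming client updates are compared by pairwise cosine similarity (the exact metric and thresholding used in [58]/[59] may differ):

```python
import numpy as np

def filter_outlier_updates(updates, keep_ratio=0.8):
    """Drop client updates whose mean cosine similarity to the other
    updates is lowest (a generic similarity-based defense sketch).

    updates: (n_clients, dim) array of flattened model updates.
    Returns the indices of the updates kept.
    """
    normed = updates / (np.linalg.norm(updates, axis=1, keepdims=True) + 1e-12)
    sims = normed @ normed.T                         # pairwise cosine similarities
    np.fill_diagonal(sims, 0.0)
    scores = sims.sum(axis=1) / (len(updates) - 1)   # mean similarity to others
    n_keep = max(1, int(round(keep_ratio * len(updates))))
    return np.argsort(scores)[::-1][:n_keep]         # keep the most typical

# Example: 9 benign updates near one direction, 1 poisoned (sign-flipped).
rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.1, size=(9, 5))
poisoned = -benign[0:1]
kept = filter_outlier_updates(np.vstack([benign, poisoned]), keep_ratio=0.9)
```

The sign-flipped update has negative similarity to every benign update, so it receives the lowest score and is excluded before aggregation.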
“…Xingyu Li et al [25] proposed LoMar, a two-phase defence algorithm that, compared with FG+Krum, improves target-label accuracy under a label-flipping attack on the Amazon dataset from 96.0% to 98.8% and overall average precision from 90.1% to 97.0%. Yuchen Tian et al [26] proposed DSPO, a defence against data poisoning attacks in FL scenarios that detects and suppresses potential outliers and outperformed existing defences in numerous cases. V. Tolpegin et al [27] introduced an FL system aggregator that clusters client gradients before the aggregated parameters are updated each round.…”
Section: Data Poisoning Defence
Mentioning confidence: 99%
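A gradient-clustering aggregator in the spirit of Tolpegin et al [27] could be sketched roughly as below. The PCA projection and the simple one-dimensional two-group split are simplifications for illustration, not the paper's exact clustering procedure:

```python
import numpy as np

def cluster_filter_gradients(grads):
    """Project client gradients onto their first principal component,
    split them into two groups at the midpoint of the projections,
    and keep the larger group as presumed benign.

    grads: (n_clients, dim) array of flattened gradients.
    Returns the indices of the gradients kept for aggregation.
    """
    centered = grads - grads.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]                    # first principal component scores
    mid = (proj.min() + proj.max()) / 2.0      # crude two-group split
    group_a = np.where(proj <= mid)[0]
    group_b = np.where(proj > mid)[0]
    return group_a if len(group_a) >= len(group_b) else group_b

# Example: 8 benign clients and 2 clients sending opposing gradients,
# as a label-flipping attack tends to produce.
rng = np.random.default_rng(1)
benign = rng.normal(0.5, 0.05, size=(8, 20))
malicious = rng.normal(-0.5, 0.05, size=(2, 20))
kept = cluster_filter_gradients(np.vstack([benign, malicious]))
```

The intuition is that poisoned gradients form a separable cluster in a low-dimensional projection, so the minority cluster can be dropped before the parameter update.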
“…Assuming the server is trusted, defenses focus on detecting incorrectly updated parameters. There are two common methods for detecting anomalous update parameters [26]. One is precision testing.…”
Section: Model Poisoning Defence
Mentioning confidence: 99%
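Precision (accuracy) testing of client updates, as mentioned above, might look like the following sketch. The `accept_update` helper, the linear model, and the tolerance threshold are hypothetical choices for illustration:

```python
import numpy as np

def validation_accuracy(weights, X, y):
    """Accuracy of a linear classifier sign(X @ w) on held-out data."""
    preds = (X @ weights > 0).astype(int)
    return (preds == y).mean()

def accept_update(global_w, client_delta, X_val, y_val, tol=0.05):
    """Accept a client's update only if applying it does not degrade
    held-out accuracy by more than tol (a precision-testing check)."""
    before = validation_accuracy(global_w, X_val, y_val)
    after = validation_accuracy(global_w + client_delta, X_val, y_val)
    return after >= before - tol

# Toy setup: a true separator, a validation set held by the server,
# one benign update and one poisoned (sign-flipping) update.
rng = np.random.default_rng(2)
w_true = np.array([1.0, -2.0, 0.5])
X_val = rng.normal(size=(200, 3))
y_val = (X_val @ w_true > 0).astype(int)
global_w = 0.8 * w_true                      # near-converged global model
ok_benign = accept_update(global_w, 0.1 * w_true, X_val, y_val)
ok_poison = accept_update(global_w, -2.0 * w_true, X_val, y_val)
```

A benign step toward the true separator keeps validation accuracy high and is accepted, while the sign-flipping update collapses accuracy and is rejected.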