2023
DOI: 10.1109/access.2023.3238823

Poisoning Attacks in Federated Learning: A Survey

Abstract: Federated learning faces many security and privacy issues. Among them, poisoning attacks can significantly impact the global model: malicious attackers can prevent the global model from converging or even manipulate its prediction results. Defending against poisoning attacks is an urgent and challenging task. However, systematic reviews of poisoning attacks and their corresponding defense strategies from a privacy-preserving perspective are still lacking. This survey provides an i…

Cited by 29 publications (6 citation statements)
References: 78 publications
“…However, like in centralized ML deployments, FL is subject to various types of security attacks, including Byzantine clients, which do not behave as expected. Specifically, poisoning attacks are intended to modify the data or directly the weights shared by clients with the aggregator to degrade the model's performance and hinder its convergence (Xia, Chen, Yu, and Ma, 2023). Indeed, previous works confirm FL's susceptibility to poisoning attacks (Baruch, Baruch, and Goldberg, 2019), particularly highlighting the vulnerability of the average function, or FedAvg (McMahan et al., 2017), as an aggregation approach.…”
Section: Introduction (mentioning)
Confidence: 99%
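The fragility of plain averaging described in the statement above can be shown with a toy sketch. The snippet below is only an illustration under assumed toy data; the `fedavg` helper, client counts, and vector sizes are hypothetical and not taken from the cited papers. Because FedAvg here is an unweighted mean, a single client that scales its update can drag the aggregate arbitrarily far from the honest average.

```python
# Toy sketch (assumption, not from the cited papers): why an unweighted mean
# of client updates is fragile against a single model-poisoning client.
import numpy as np

def fedavg(updates):
    """Aggregate client updates by simple averaging (FedAvg-style mean)."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=4) for _ in range(9)]  # 9 honest clients
poisoned = 100.0 * np.ones(4)                              # 1 malicious client

print(fedavg(honest))               # stays close to the true mean (~0)
print(fedavg(honest + [poisoned]))  # dragged toward the attacker's update
```

In this sketch, the second aggregate is dominated by the single scaled update, which is the intuition behind replacing the plain mean with more robust aggregation rules.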
“…In our work, we evaluate various approaches for combining local RF models for cybersecurity applications. In addition, the differential privacy property is added to our model, making it private by design and providing a countermeasure to potential data poisoning attacks [53]. To achieve that, we follow the work from [17], where differential privacy is achieved with the Exponential mechanism, which gives strong privacy guarantees and high accuracy in practice.…”
Section: Random Forest With Differential Privacy (mentioning)
Confidence: 99%
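As a rough illustration of the mechanism mentioned in the statement above, the sketch below implements a generic Exponential mechanism for privately selecting one candidate, as a differentially private random forest might do when choosing a split attribute. The function name, utility scores, and parameters are assumptions for illustration only; this is not the implementation from [17] or [53].

```python
# Generic Exponential mechanism sketch (assumed example, not the cited works):
# a candidate is sampled with probability proportional to
# exp(epsilon * utility / (2 * sensitivity)).
import numpy as np

def exponential_mechanism(utilities, epsilon, sensitivity, rng=None):
    """Return the index of a candidate sampled via the Exponential mechanism."""
    if rng is None:
        rng = np.random.default_rng()
    scores = epsilon * np.asarray(utilities, dtype=float) / (2.0 * sensitivity)
    scores -= scores.max()          # subtract max for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum()
    return rng.choice(len(utilities), p=probs)

# Example: privately choose among 4 candidate split attributes with toy utilities.
chosen = exponential_mechanism(utilities=[0.2, 0.9, 0.5, 0.1],
                               epsilon=1.0, sensitivity=1.0)
print("selected attribute index:", chosen)
```

Higher-utility candidates are more likely to be chosen, but every candidate retains nonzero probability, which is what yields the formal privacy guarantee.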
“…A recent survey in the field highlights the most advanced federated learning poisoning attack and defense schemes [20]. They classify the defense strategies against poisoning attacks in federated learning into three categories: model analysis, Byzantine-robust aggregation, and verification-based approaches.…”
Section: Related Work (mentioning)
Confidence: 99%
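To make the "Byzantine-robust aggregation" category from the statement above concrete, here is a minimal sketch of one representative rule, a coordinate-wise trimmed mean. It is an assumed toy example, not a specific scheme described in [20]; the function name, trim ratio, and data are hypothetical.

```python
# Toy sketch of a Byzantine-robust aggregation rule (coordinate-wise trimmed
# mean); illustrative only, not a scheme from the cited survey.
import numpy as np

def trimmed_mean(updates, trim_ratio=0.2):
    """Drop the largest and smallest values per coordinate, then average the rest."""
    arr = np.sort(np.stack(updates), axis=0)   # sort each coordinate across clients
    k = int(trim_ratio * arr.shape[0])         # number of values to trim per end
    return arr[k:arr.shape[0] - k].mean(axis=0)

rng = np.random.default_rng(1)
updates = [rng.normal(0.0, 0.1, size=4) for _ in range(8)]
updates.append(100.0 * np.ones(4))    # one poisoned update
updates.append(-100.0 * np.ones(4))   # another poisoned update

print(trimmed_mean(updates))  # extreme updates are trimmed; result stays near 0
```

Unlike the plain mean in the earlier sketch, the trimmed mean discards extreme per-coordinate values, so a small fraction of poisoned updates cannot dominate the aggregate.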