2022 IEEE Wireless Communications and Networking Conference (WCNC)
DOI: 10.1109/wcnc51071.2022.9771619
Defense Strategies Toward Model Poisoning Attacks in Federated Learning: A Survey

Abstract: Due to the greatly improved capabilities of devices, massive data, and increasing concern about data privacy, Federated Learning (FL) has been increasingly considered for application to wireless communication networks (WCNs). Wireless FL (WFL) is a distributed method of training a global deep learning model in which a large number of participants each train a local model on their own training datasets and then upload the local model updates to a central server. However, in general, non-independent and identically …

Cited by 14 publications (4 citation statements)
References 202 publications
“…As argued in Sun et al (2019b), norm-bounding is thought to be sufficient to prevent these attacks. We acknowledge that other defenses exist, see overviews in Wang et al (2022) and Qiu et al (2022), yet the proposed attack is designed to be used against norm-bounded FL systems and we verify in Appendix A.5 that it does not break other defenses. We focus on norm bounding because it is a key defense that is widely adopted in industrial implementations of federated learning (Bonawitz et al, 2019;Paulik et al, 2021;Dimitriadis et al, 2022).…”
Section: Can You Backdoor Federated Learning?
confidence: 87%
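The norm-bounding defense referenced in the statement above can be sketched as follows. This is an illustrative implementation only, not the code of any cited system; the function name and the `clip_norm` threshold are assumptions.

```python
import numpy as np

def norm_bounded_average(updates, clip_norm=1.0):
    """Aggregate client model updates, clipping each to a maximum L2 norm.

    Norm-bounding limits how far any single (possibly poisoned) client
    update can move the global model in one aggregation round.
    """
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        # Scale down any update whose norm exceeds the bound; leave others as-is.
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append(u * scale)
    return np.mean(clipped, axis=0)
```

Under this scheme, a malicious update of arbitrarily large norm contributes no more to the average than an honest update sitting exactly at the bound.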
“…However, the poisoning attack is one of the many security vulnerabilities they discussed and is only briefly described in their paper. Zhilin et al [12] discussed defense strategies against model poisoning attacks in federated learning. They classified existing defense strategies into two categories: evaluation methods for local model updates and aggregation methods for the global model.…”
Section: Related Work
confidence: 99%
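The two defense categories named in the statement above — evaluating local model updates and robustly aggregating the global model — can be illustrated with a minimal sketch. The cosine-similarity screen, its threshold, and the coordinate-wise median aggregator are illustrative choices on my part, not the specific methods of the cited survey.

```python
import numpy as np

def screen_updates(updates, reference, sim_threshold=0.0):
    """Category 1: evaluate local updates, dropping those whose direction
    disagrees with a trusted reference update (cosine-similarity screen)."""
    kept = []
    for u in updates:
        denom = np.linalg.norm(u) * np.linalg.norm(reference)
        sim = float(np.dot(u, reference) / denom) if denom > 0 else 0.0
        if sim >= sim_threshold:
            kept.append(u)
    return kept

def median_aggregate(updates):
    """Category 2: robust aggregation for the global model — the
    coordinate-wise median tolerates a minority of arbitrarily bad
    updates better than a plain mean does."""
    return np.median(np.stack(updates), axis=0)
```

In practice the two categories compose: screened updates can be fed into a robust aggregator for defense in depth.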
“…Therefore, federated learning is considered effective in protecting user privacy during training. However, further research shows that federated learning also faces many security and privacy risks [7], [8], [9], [10], such as poisoning attacks [11], [12], [13], [14], [15] and privacy leakage [16], [17], [18], [19], [20], [21]. This paper focuses on the risk of poisoning attacks in federated learning.…”
Section: Introduction
confidence: 99%
“…Major contents:
[20] early work covering data poisoning, model poisoning, and defenses
[21] problems of communication, poisoning attacks, inference attacks, and privacy leakage
[22] the concept of semi-supervised federated learning and its applications
[23] defenses against model poisoning and privacy inference attacks
[24] blockchain-based privacy protection for federated learning
[25] classification of federated learning privacy protection and the corresponding defenses
[26] federated learning privacy protection, communication overhead, and malicious-participant defenses
[27] defense methods for model poisoning
[28] federated learning privacy-protection convergence schemes
[29] survey and evaluation of federated learning privacy attacks and defense programs
[30] federated learning robustness, privacy attacks, and defenses…”
Section: Ref
confidence: 99%