Federated learning faces many security and privacy issues. Among them, poisoning attacks can significantly impact the global model: malicious attackers can prevent the global model from converging or even manipulate its prediction results. Defending against poisoning attacks is therefore an urgent and challenging task. However, systematic reviews of poisoning attacks and their corresponding defense strategies from a privacy-preserving perspective are still lacking. This survey provides an in-depth and up-to-date overview of poisoning attacks and the corresponding defense strategies in federated learning. We first classify poisoning attacks according to their methods and targets. Next, we analyze the differences and connections between the various categories of poisoning attacks. In addition, we classify the defense strategies against poisoning attacks in federated learning into three categories and analyze their advantages and disadvantages. Finally, we discuss the privacy protection problem in poisoning attacks and their countermeasures, and propose potential research directions from the perspectives of attack and defense, respectively.

INDEX TERMS Federated learning, distributed machine learning, poisoning attacks, defense of poisoning attacks.