Federated learning (FL) has emerged as a promising approach to the data-silo problem, enabling multiple participants to train a joint model collaboratively without centralizing their data. Security and privacy considerations in FL focus on ensuring the robustness of the global model and the confidentiality of participants’ information. However, the FL paradigm faces a variety of security threats from adversarial aggregators and participants. It is therefore necessary to comprehensively identify and classify potential threats, providing a theoretical basis for FL with security guarantees. In this paper, we construct a classification of attacks, reviewing state-of-the-art research on security and privacy issues in FL from the perspective of malicious threats posed by different computing parties. Specifically, we categorize attacks according to whether they are performed by the aggregator or by participants, highlighting deep gradient leakage attacks and Generative Adversarial Network (GAN) attacks. Following this overview of attack methods, we discuss the primary mitigation techniques against security risks and privacy breaches, especially the application of blockchain and Trusted Execution Environments (TEEs). Finally, we discuss several promising directions for future research.
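To make the gradient-leakage threat concrete, the toy sketch below shows why shared gradients can expose private data. This is not the full iterative gradient-matching attack surveyed above; it is a simplified, hypothetical setup in which, for a single-sample linear model with squared-error loss, the gradients analytically reveal the input. All variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x_private = rng.normal(size=4)   # participant's private input
y_private = 1.5                  # participant's private label
w = rng.normal(size=4)           # current global model weights
b = 0.0                          # current global model bias

# The participant computes gradients of L = 0.5*(w@x + b - y)^2
# and shares them with the aggregator, as in standard FL.
residual = w @ x_private + b - y_private
grad_w = residual * x_private    # dL/dw = residual * x
grad_b = residual                # dL/db = residual

# An honest-but-curious aggregator can reconstruct the private input,
# because grad_w = grad_b * x implies x = grad_w / grad_b.
x_reconstructed = grad_w / grad_b
```

For deeper networks the relationship is no longer closed-form, which is why gradient-leakage attacks instead optimize a dummy input until its gradients match the shared ones; the linear case above captures the underlying information leak.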
Trustworthy artificial intelligence (AI) technology has revolutionized daily life and greatly benefited human society. Among various AI technologies, Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios, ranging from risk evaluation systems in finance to cutting-edge technologies like drug discovery in life sciences. However, challenges around data isolation and privacy threaten the trustworthiness of FL systems. Adversarial attacks against data privacy, learning algorithm stability, and system confidentiality are particularly concerning in the context of distributed training in federated learning. Therefore, it is crucial to develop FL in a trustworthy manner, with a focus on security, robustness, and privacy. In this survey, we propose a comprehensive roadmap for developing trustworthy FL systems and summarize existing efforts from three key aspects: security, robustness, and privacy. We outline the threats that pose vulnerabilities to trustworthy federated learning across different stages of development, including data processing, model training, and deployment. To guide the selection of the most appropriate defense methods, we discuss specific technical solutions for realizing each aspect of Trustworthy FL (TFL). Our approach differs from previous work that primarily discusses TFL from a legal perspective or presents FL from a high-level, non-technical viewpoint.
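Since both abstracts center on distributed model training, a minimal sketch of one aggregation round may help fix ideas. This assumes the common weighted-averaging rule (each client's update weighted by its dataset size); the function and variable names are illustrative, not from either paper.

```python
import numpy as np

def weighted_average(local_weights, num_samples):
    """Aggregate clients' local model weights, weighting each
    client by the number of samples it trained on."""
    total = sum(num_samples)
    coeffs = np.array(num_samples, dtype=float) / total
    stacked = np.stack(local_weights)          # shape: (clients, params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Three hypothetical clients with unequal data sizes.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
global_model = weighted_average(clients, sizes)
# 0.1*[1,2] + 0.3*[3,4] + 0.6*[5,6] = [4.0, 5.0]
```

Because the server sees every client's update in the clear under this scheme, the privacy and robustness threats both surveys discuss arise precisely at this aggregation step.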