2021
DOI: 10.48550/arxiv.2110.11578
Preprint

PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy

Abstract: Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging model updates. Existing FL protocol designs have been shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness. Recently proposed defenses focused on ensuring either privacy or robustness, but not both. In this paper, we develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and…
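For readers new to the setting, the sketch below shows one federated-averaging round in numpy: each client computes a local model update, clips it, and adds Gaussian noise before sending it, so the server only ever aggregates noised updates. This is a generic illustration of the exchange-of-updates idea in the abstract, not PRECAD's crypto-aided protocol; the function names, the least-squares local objective, and the noise parameters are all assumptions made here for brevity.

```python
import numpy as np

def client_update(global_model, local_data, lr=0.1):
    """One local gradient step on a least-squares objective; only the
    resulting model delta leaves the client, never the raw data."""
    X, y = local_data
    grad = X.T @ (X @ global_model - y) / len(y)
    return -lr * grad

def clip_and_noise(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip the update to L2 norm clip_norm and add Gaussian noise,
    a DP-SGD-style treatment of each per-client contribution."""
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=clipped.shape)

def server_aggregate(updates):
    """Federated averaging: the server sees only the noised updates."""
    return np.mean(updates, axis=0)

# One toy round with three clients and a 5-dimensional linear model.
rng = np.random.default_rng(0)
d = 5
global_model = np.zeros(d)
clients = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(3)]

updates = [clip_and_noise(client_update(global_model, c), rng=rng) for c in clients]
global_model = global_model + server_aggregate(updates)
```

Clipping bounds each client's influence on the aggregate, which is what allows the added Gaussian noise to translate into a formal per-round DP guarantee.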

Cited by 3 publications (3 citation statements)
References 50 publications (63 reference statements)
“…Gu et al. in 2021 [73] proposed PRECAD, a framework for FL via crypto-aided differential privacy. This framework achieves differential privacy and uses cryptography against poisoning attacks.…”
Section: Related Work
confidence: 99%
“…The studies described here assume that a single server is used for the aggregation. Another line of studies uses multiple servers to improve scalability and/or resilience against Byzantine attacks [33], [34], [35], [36]. These studies use secure aggregation methods designed for a single server, and thus BREA and BREA-SV can be extended to multiple servers in the same way as these studies.…”
Section: Related Work
confidence: 99%
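To illustrate the multi-server aggregation idea the statement above refers to, the toy below additively secret-shares each client's update across two servers: each server alone holds only random-looking shares, and only the combined per-server sums reveal the aggregate of all updates. This is a rough sketch, not any of the cited protocols; real multi-server schemes operate over finite fields and handle dropouts and Byzantine behaviour, and all names here are illustrative assumptions.

```python
import numpy as np

def share_update(update, num_servers, rng):
    """Split one client's update into additive shares, one per server.
    Each share individually looks like noise; they sum to the update.
    (Real protocols do this over a finite field; reals keep the toy short.)"""
    shares = [rng.normal(size=update.shape) for _ in range(num_servers - 1)]
    shares.append(update - sum(shares))
    return shares

def aggregate_across_servers(all_shares):
    """Each server sums the shares it holds; adding the per-server sums
    reveals only the aggregate of all client updates."""
    num_servers = len(all_shares[0])
    per_server_sums = [sum(client_shares[s] for client_shares in all_shares)
                       for s in range(num_servers)]
    return sum(per_server_sums)

rng = np.random.default_rng(1)
client_updates = [rng.normal(size=4) for _ in range(3)]            # three clients
all_shares = [share_update(u, num_servers=2, rng=rng) for u in client_updates]
aggregate = aggregate_across_servers(all_shares)
assert np.allclose(aggregate, sum(client_updates))                 # sums match
```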
“…A plausible explanation for the gap in the literature on decentralized marketplaces that provide DP is that the DP paradigm is inherently centralized, in the sense that it assumes the existence of a central curator who receives all the data, performs computations over it, and adds noise to the outputs before disclosing them. Solutions for providing DP in a decentralized manner, including approaches based on replacing the central curator by MPC protocols that are run by distributed servers, have recently been proposed by us and others for training convolutional neural networks (CNNs) [23,41,44], decision trees [41], linear support vector machines [41], logistic regression models [34], and even for generating synthetic data [35]. While providing both input and output privacy, these methods were proposed outside of the context of data marketplaces.…”
Section: Related Work
confidence: 99%
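The last statement contrasts decentralized solutions with the standard central-curator model of DP. As a minimal reference point, the sketch below shows that central model for a simple counting query: a trusted curator receives all records, computes the true answer, and adds Laplace noise calibrated to a sensitivity of 1 before releasing it. The function name and parameters are illustrative, not drawn from the cited works.

```python
import numpy as np

def central_curator_count(all_client_data, epsilon=1.0, rng=None):
    """Central-curator DP: the curator holds all raw records, computes the
    true count, and adds Laplace noise (sensitivity 1) before disclosure."""
    rng = rng if rng is not None else np.random.default_rng()
    true_count = sum(len(records) for records in all_client_data)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(2)
client_data = [rng.normal(size=n) for n in (10, 25, 7)]  # three data owners
print(central_curator_count(client_data, epsilon=0.5, rng=rng))
```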