2022
DOI: 10.48550/arxiv.2205.04330
Preprint

Protecting Data from all Parties: Combining FHE and DP in Federated Learning

Abstract: This paper tackles the problem of ensuring training data privacy in a federated learning context. Relying on Homomorphic Encryption (HE) and Differential Privacy (DP), we propose a framework addressing threats to the privacy of the training data. Notably, the proposed framework ensures the privacy of the training data from all actors of the learning process, namely the data owners and the aggregating server. More precisely, while HE blinds a semi-honest server during the learning protocol, DP protects the data…
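
To make the HE+DP combination concrete, here is a minimal sketch of a client-side update in this spirit: the gradient is clipped, Gaussian noise is added for DP, and the noisy result is encrypted before leaving the client. This is an illustration under stated assumptions, not the paper's implementation: the `phe` (Paillier) library used here is only additively homomorphic and merely stands in for the paper's FHE scheme, and `clip_norm` and `sigma` are illustrative parameters.

```python
# Illustrative client-side update combining DP and HE (not the paper's code).
# Paillier (additively homomorphic, via the `phe` library) stands in for FHE.
import numpy as np
from phe import paillier

def private_client_update(grad, public_key, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip the gradient, add Gaussian noise (DP), then encrypt it (HE)."""
    rng = rng or np.random.default_rng()
    # 1. Clip to bound each client's sensitivity.
    grad = grad * min(1.0, clip_norm / max(np.linalg.norm(grad), 1e-12))
    # 2. Gaussian mechanism: noise scale proportional to the clipping bound.
    noisy = grad + rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    # 3. Encrypt coordinate-wise so the server never sees the plaintext update.
    return [public_key.encrypt(float(x)) for x in noisy]

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
enc_update = private_client_update(np.array([0.3, -1.2, 0.7]), public_key)
```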

Cited by 2 publications (5 citation statements). References 24 publications.
“…By leveraging the strengths of each technique, it becomes possible to mitigate their respective drawbacks and achieve enhanced privacy protection. HE can amplify the privacy offered by DP to protect the updates from all the parties, as in Sébert et al [74]. While HE protects the intermediate updates from the server, DP also ensures the final model remains secure, preventing adversaries from performing model inversion attacks.…”
Section: Discussion and Learned Lessons
confidence: 99%
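
The claim that "HE protects the intermediate updates from the server" can be illustrated by the aggregation step: the server adds ciphertexts without holding a secret key, so only the aggregate is ever decrypted. A minimal sketch, again using the additively homomorphic `phe` (Paillier) library as a stand-in for FHE, with made-up client updates:

```python
# Illustrative server-side homomorphic aggregation (Paillier stands in for FHE).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Three clients each submit an encrypted (already clipped and noised) update.
updates = [[public_key.encrypt(x) for x in vec]
           for vec in ([0.1, 0.4], [-0.2, 0.3], [0.05, -0.1])]

# The server sums ciphertexts coordinate-wise; lacking the secret key,
# it learns nothing about any individual client's update.
encrypted_sum = updates[0]
for update in updates[1:]:
    encrypted_sum = [a + b for a, b in zip(encrypted_sum, update)]

# Only the key holder decrypts, and only the aggregate, never a single update.
average = [private_key.decrypt(c) / len(updates) for c in encrypted_sum]
```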
“…However, this amalgamation is far from straightforward, necessitating a careful equilibrium between computational complexity, model precision, and privacy considerations. As suggested in the work of Sébert et al [74], combining these two techniques has the potential to safeguard raw data across all participants in the FL process, thereby showcasing a direction for future exploration.…”
Section: Discussion
confidence: 99%
“…One technique is differential privacy [14], which adds appropriate noise to the shared parameters according to the desired privacy level. For example, [15] added Laplace noise to the gradients and selectively shared the perturbed gradients, while [16], [17] presented client-side differentially private federated learning schemes that hide clients' model contributions during training. To protect the local models, the noise added to each of them must be large enough that the aggregate noise in the combined model becomes too large, completely destroying the utility of that model.…”
Section: Related Work
confidence: 99%
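
The utility problem this passage describes follows directly from how DP noise is calibrated: the Laplace mechanism uses scale sensitivity/ε, so stronger privacy (smaller ε) forces proportionally larger noise on every shared gradient. A small sketch with illustrative numbers (the mechanism is the standard one; the parameters are made up):

```python
# Illustrative Laplace mechanism on a gradient (standard DP recipe).
import numpy as np

def laplace_perturb(grad, sensitivity, epsilon, rng=None):
    """Add Laplace(sensitivity / epsilon) noise: smaller epsilon, more noise."""
    rng = rng or np.random.default_rng()
    return grad + rng.laplace(0.0, sensitivity / epsilon, size=grad.shape)

g = np.array([0.25, -0.8, 0.1])
print(laplace_perturb(g, sensitivity=1.0, epsilon=0.1))   # strong privacy, heavy noise
print(laplace_perturb(g, sensitivity=1.0, epsilon=10.0))  # weak privacy, mild noise
```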
“…No collusion between the server and the clients participating in the federated learning protocol was assumed, since the keys (sk, pk) needed for the homomorphic encryption and the signatures are generated by one of the clients and shared among all clients. To deal with the collusion problem of [29], the work [17] proposed adding Gaussian noise to the local models before homomorphic encryption. However, the standard deviation of the additive Gaussian noise must be small so as not to destroy the genuine local models, meaning this noise cannot provide a high level of differential privacy (ε is not small, i.e., not less than 1).…”
Section: Related Work
confidence: 99%
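
The tension described here (per-client noise too small for meaningful DP, yet larger noise would destroy the models) is exactly what encrypting the updates can relax: if the server only ever sees the encrypted sum, clients can split the noise budget so that the aggregate, rather than each individual update, carries the full DP noise. A back-of-the-envelope sketch of this variance accounting (assumed target scale, independent Gaussian noise; the framing is an inference from the combination, not a quote from the paper):

```python
# Back-of-the-envelope: splitting the Gaussian noise budget across clients.
# Independent Gaussian noises add in variance: n clients adding std s each
# give the sum a noise std of s * sqrt(n).
import numpy as np

n_clients, sigma_target = 100, 1.0  # sigma_target: noise the aggregate needs

# Without encryption, the server sees every update, so each client must add
# the full sigma_target on its own; the sum then carries sqrt(n) times more.
noise_in_sum_local_dp = sigma_target * np.sqrt(n_clients)   # 10.0

# With HE hiding individual updates, each client adds only a 1/sqrt(n) share,
# and the decrypted aggregate carries exactly the targeted noise level.
per_client_sigma = sigma_target / np.sqrt(n_clients)        # 0.1
noise_in_sum_he = per_client_sigma * np.sqrt(n_clients)     # 1.0

print(noise_in_sum_local_dp, noise_in_sum_he)
```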