2018
DOI: 10.48550/arxiv.1812.03224
Preprint
A Hybrid Approach to Privacy-Preserving Federated Learning

Cited by 19 publications (40 citation statements)
References 33 publications
“…This is a limitation of the general Federated Learning protocol and is not exclusive to our approach. Recently, Geyer et al. [8] and Truex et al. [32] introduced frameworks that preserve client-level differential privacy. However, Melis et al. demonstrated that client-level privacy guarantees come at the expense of model performance and are only effective when the number of clients participating in the aggregation is large, thousands or more [24].…”
Section: Discussion
confidence: 99%
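The excerpt above notes that client-level DP only becomes effective with thousands of participants. A minimal sketch can make the reason concrete: in DP federated averaging, each client's update is clipped and Gaussian noise calibrated to the per-client sensitivity is added to the average, so the noise-to-signal ratio shrinks as the cohort grows. The function name and parameters below are illustrative, not taken from any of the cited papers.

```python
import numpy as np

def dp_federated_average(updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Hypothetical client-level DP aggregation sketch: clip each client's
    update to clip_norm, average, then add Gaussian noise scaled to the
    per-client sensitivity (clip_norm / n)."""
    rng = rng or np.random.default_rng(0)
    n = len(updates)
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        # Scale down any update whose L2 norm exceeds the clipping bound.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # One client can shift the average by at most clip_norm / n, so the
    # calibrated noise shrinks relative to the signal as n grows -- hence
    # the need for large cohorts that Melis et al. point out.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=avg.shape)
    return avg + noise

# Demo: with zero noise and symmetric clipped updates, the average is zero.
demo = dp_federated_average([np.ones(4) * 3, -np.ones(4) * 3], noise_multiplier=0.0)
```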
“…Existing works by technique: Homomorphic Encryption [81], [82]; DP — CDP [7], [45]; LDP [18], [83], [84], [85], [86], [87], [88]; DDP+Cryptography [21], [89], [90]; Secure Multiparty Computation [5], [91]. While privacy preservation has been extensively studied in the machine learning community, privacy preservation in federated learning can be more challenging due to sporadic access to power and network connectivity, statistical heterogeneity in the data, etc. Existing works in privacy-preserving federated learning are mostly built on established privacy-preserving techniques, including: (1) homomorphic encryption, such as the Paillier [92], ElGamal [93], and Brakerski-Gentry-Vaikuntanathan cryptosystems [94]; (2) Secure Multiparty Computation (SMC), such as garbled circuits [95] and secret sharing [96]; and (3) differential privacy [97], [98].…”
Section: Privacy-Preserving Techniques
confidence: 99%
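Of the techniques listed above, secret sharing [96] underlies most SMC-based secure aggregation schemes and admits a compact illustration: each client splits its input into random additive shares modulo a prime, parties sum the shares they hold, and only the aggregate can be reconstructed. This is a toy sketch of additive secret sharing, not the protocol of any cited work; the modulus and function names are assumptions.

```python
import secrets

PRIME = 2**61 - 1  # field modulus for this toy scheme (assumed, not from the paper)

def make_shares(value, n_parties):
    """Split value into n additive shares mod PRIME; any n-1 shares
    are uniformly random and reveal nothing about value."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % PRIME
    return shares + [last]

def reconstruct(shares):
    """Recover the shared value (or a sum of shared values) mod PRIME."""
    return sum(shares) % PRIME

# Secure aggregation: each client secret-shares its input across 3 parties;
# each party sums the shares it holds; only the total is ever reconstructed.
inputs = [5, 17, 42]
all_shares = [make_shares(x, 3) for x in inputs]
per_party = [sum(col) % PRIME for col in zip(*all_shares)]
total = reconstruct(per_party)  # equals sum(inputs); no party saw a raw input
```

Because sharing is linear, summing shares commutes with summing inputs, which is exactly why additive schemes compose cleanly with the DDP+Cryptography approaches in the table.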
“…The notion of DDP reflects the fact that the required level of noise in the target statistic is sourced from multiple participants [115]. Approaches to DDP that implement an overall additive noise mechanism by summing the same mechanism run at each participant (typically with less noise) necessitate mechanisms with stable distributions — to guarantee proper calibration of the known end-to-end response distribution — and cryptography for hiding all but the final result from participants [21], [89], [90], [103], [104], [105], [115]. Stable distributions include the Gaussian distribution, the Binomial distribution [110], etc.: the sum of Gaussian random variables still follows a Gaussian distribution, and the sum of Binomial random variables still follows a Binomial distribution.…”
Section: Definition 5.2 ((ε, δ) Local Differential Privacy) A Randomize...
confidence: 99%
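The stability property described above can be checked numerically: if each of n participants adds Gaussian noise with variance σ²/n, the summed noise is distributed exactly as a single N(0, σ²) draw, matching the end-to-end mechanism a trusted curator would run. This is a hypothetical demonstration under assumed parameter names, not code from any cited work.

```python
import numpy as np

def distributed_gaussian_noise(n_parties, total_sigma, shape, rng=None):
    """Each party locally draws N(0, total_sigma**2 / n_parties) noise.
    Because the Gaussian family is stable under addition, the sum over
    parties is distributed as N(0, total_sigma**2) -- the calibration
    property DDP mechanisms rely on."""
    rng = rng or np.random.default_rng(0)
    per_party_sigma = total_sigma / np.sqrt(n_parties)
    return sum(rng.normal(0.0, per_party_sigma, shape) for _ in range(n_parties))

# Empirically, the summed per-party noise matches the target scale sigma=2.
samples = distributed_gaussian_noise(10, total_sigma=2.0, shape=(200_000,))
```

In a real DDP deployment, each participant's contribution would additionally be hidden by cryptography (e.g., secure aggregation), so only the already-noised sum is revealed.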
“…Moreover, it does not protect data privacy against the aggregator. Truex et al. [52] propose a hybrid approach for privacy-preserving federated learning that leverages DP and secure multi-party computation among collaborators. These works, although provably privacy-preserving, are not robust to poisoning attacks and produce models with undesirably poor privacy-utility tradeoffs [27].…”
Section: Related Work
confidence: 99%