2019 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata47090.2019.9005465
Federated Learning with Bayesian Differential Privacy

Abstract: We consider the problem of reinforcing federated learning with formal privacy guarantees. We propose to employ Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, to provide sharper privacy loss bounds. We adapt the Bayesian privacy accounting method to the federated setting and suggest multiple improvements for more efficient privacy budgeting at different levels. Our experiments show significant advantage over the state-of-the-art differential privacy bounds fo…
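The accounting idea summarized in the abstract can be sketched as follows. This is a simplified illustration, not the paper's exact procedure: it assumes a plain Gaussian mechanism without subsampling, and the function names and sampled gradient norms are hypothetical. The point is that the λ-th log-moment of the privacy loss is estimated from gradient norms sampled from the data distribution, rather than bounded by the worst-case clipping norm.

```python
import numpy as np

def bayesian_moment(grad_norms, lam, sigma):
    # Estimate the lambda-th log-moment of the Gaussian-mechanism
    # privacy loss by averaging over sampled per-example gradient
    # norms (Bayesian view: data is similarly distributed).
    m = lam * (lam + 1) * grad_norms**2 / (2 * sigma**2)
    return np.log(np.mean(np.exp(m)))

def classical_moment(clip_norm, lam, sigma):
    # Classical DP accounting uses the worst-case sensitivity
    # (the clipping norm) in the same moment bound.
    return lam * (lam + 1) * clip_norm**2 / (2 * sigma**2)

def eps_from_moments(total_moment, lam, delta=1e-5):
    # Convert an accumulated log-moment over all rounds into an
    # epsilon guarantee via the standard Chernoff-style bound.
    return (total_moment + np.log(1 / delta)) / lam
```

Because typical gradient norms sit well below the clipping bound, the sampled moment is smaller than the worst-case one, which is where the sharper privacy loss bounds come from.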



Cited by 145 publications (108 citation statements)
References 35 publications
“…In [27], the authors study the privacy risk of FL, and propose a GAN-based attack that allows servers to target a specific client and compromise client-level privacy. Triastcyn and Faltings [28] introduce Bayesian differential privacy, a natural relaxation of differential privacy that provides better privacy guarantees for clients. The main idea is based on the fact that FL tasks are often focused on a particular type of data.…”
Section: B. Privacy and Security of FL
confidence: 99%
“…Although the local raw data is not exposed in the FL setting, FL on its own still lacks theoretical privacy guarantees [33] and may leak sensitive information about the training data [36]. Therefore, combining FL with proper privacy-preserving mechanisms, such as DP [14], HE [30], or MPC [17], is necessary to alleviate FL's privacy risks.…”
Section: Related Work
confidence: 99%
“…McMahan et al. [26] proposed DP-FedAvg, a differentially private version of vanilla FedAvg. Triastcyn and Faltings [33] proposed Bayesian differential privacy, a relaxation of DP with a tighter privacy budget, so that an FL task over a population with similarly distributed data can converge faster than under DP-FedAvg. Unlike existing methods that provide gradient-level perturbation, our method focuses on IR perturbation within each multi-party SGD iteration, which is unique to VFL.…”
Section: Related Work
confidence: 99%
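The gradient-level perturbation that DP-FedAvg and its Bayesian variant build on can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' exact procedure; `clip_norm`, `noise_sigma`, and the helper names are hypothetical. Each client's update is clipped to a bounded L2 norm, then Gaussian noise calibrated to that bound is added before server-side averaging.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_sigma=1.0, rng=None):
    # Clip the client's model update to L2 norm at most clip_norm,
    # then add Gaussian noise scaled to that sensitivity bound --
    # the Gaussian-mechanism step underlying DP-FedAvg-style methods.
    rng = np.random.default_rng(rng)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_sigma * clip_norm, size=update.shape)

def aggregate(updates, **kwargs):
    # Server-side federated averaging over privatized client updates.
    return np.mean([privatize_update(u, **kwargs) for u in updates], axis=0)
```

The privacy accountant (classical or Bayesian) then tracks the cumulative loss incurred by these noised rounds; only the accounting differs between the two approaches, not the perturbation itself.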
“…Around the same time as our work, Triastcyn et al. [40] independently propose Bayesian differential privacy, which takes into account both sources of randomness. Despite this similarity, our works differ in multiple dimensions.…”
Section: Related Work
confidence: 99%