Differentially Private Federated Learning: A Client Level Perspective

Preprint, 2017
DOI: 10.48550/arxiv.1712.07557

Abstract: Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in a decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client's contribu…
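
The aggregation loop the abstract describes can be written down in a few lines. The sketch below is a minimal illustration of that protocol, not the paper's implementation; curator_round and local_train are hypothetical names, and client-side optimization is abstracted into a single call:

```python
import numpy as np

def curator_round(global_weights, client_datasets, local_train):
    """One round of federated learning as described in the abstract:
    every client optimizes the current model on its own data, the
    trusted curator averages the returned parameters, and the joint
    model is redistributed to all clients. Raw data never leaves
    the clients; only parameters are shared."""
    client_weights = [local_train(global_weights, data)
                      for data in client_datasets]
    return np.mean(client_weights, axis=0)  # curator-side aggregation
```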

Cited by 160 publications (304 citation statements). References 4 publications.

Selected citation statements:
“…Some data still have to be sent in an aggregated form for billing, but these data do not reveal many details. Techniques such as secure aggregation [25] and differential privacy [26] are being explored to enforce trust requirements.…”
Section: B. Federated Learning (mentioning confidence: 99%)
“…DP is used to perturb the response for each query in SMC. Geyer et al. [9] introduce a DP algorithm focusing on removing data-source information. In addition to using the same SGD framework as DPSGD, the algorithm also randomly ignores a portion of the data to protect data privacy.…”
Section: Differential Privacy in Federated Learning (mentioning confidence: 99%)
“…Any inappropriate setting of fixed privacy parameters will introduce vulnerability to gradient leakage attacks. The configurations compared are [29], [50] and the DP-baseline with clipping bound C = S = 4 and noise scale σ = 6 as in [24], [27]. It is observed that an adequate amount of noise is necessary to mask the gradients from malicious or curious inference attackers using reconstruction learning on the leaked gradients.…”
Section: Baseline with Fixed Parameters (mentioning confidence: 99%)
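
The fixed-parameter baseline named here (clipping bound C = S = 4 and noise scale σ = 6, unchanged at every step) reduces to per-example clipping plus constant Gaussian noise. A minimal sketch under that reading, with a hypothetical function name and simplified calibration:

```python
import numpy as np

def dp_baseline_gradient(per_example_grads, C=4.0, sigma=6.0, rng=None):
    """DP baseline with fixed parameters: clip every per-example
    gradient to L2 norm C, average, and add Gaussian noise with
    standard deviation sigma * C / batch_size. Because C and sigma
    never change, the same amount of noise is injected at every
    training step."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, C / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    return avg + rng.normal(0.0, sigma * C / len(clipped), size=avg.shape)
```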
“…Inherent Limitations of Baseline. Inspired by the pioneering work [24], many proposals in the literature [25], [27], [29] and in the open-source community [30], [31] employ a fixed-privacy-parameter strategy to decide the clipping method and to define the sensitivity of gradient updates and the noise scale, which results in constant noise injection at every step of the entire training process. Although such a rigid setting of privacy parameters has shown reasonable accuracy while providing a certain level of differential privacy guarantee, it suffers from some inherent limitations.…”
Section: Introduction (mentioning confidence: 99%)
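
To make the criticism concrete: under a fixed strategy the noise multiplier is a constant function of the training step. The toy schedule below contrasts that with a decaying alternative; the decay is purely hypothetical and is shown only to illustrate what constant noise injection forgoes:

```python
def noise_scale(step, total_steps, sigma0=6.0, fixed=True):
    """Noise multiplier per training step. The fixed strategy
    criticized in the excerpt returns sigma0 at every step; the
    hypothetical decaying schedule shrinks the noise as training
    converges and gradient magnitudes fall."""
    if fixed:
        return sigma0
    return sigma0 * (1.0 - 0.9 * step / total_steps)  # decays to 0.1 * sigma0
```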