2022
DOI: 10.48550/arxiv.2203.06735
Preprint

Private Non-Convex Federated Learning Without a Trusted Server

Abstract: We study differentially private (DP) federated learning (FL) with non-convex loss functions and heterogeneous (non-i.i.d.) client data in the absence of a trusted server, both with and without a secure "shuffler" to anonymize client reports. We propose novel algorithms that satisfy local differential privacy (LDP) at the client level and shuffle differential privacy (SDP) for three classes of Lipschitz continuous loss functions: First, we consider losses satisfying the Proximal Polyak-Łojasiewicz (PL) inequality…

Cited by 3 publications (4 citation statements); References 13 publications
“…A notable example is Abadi et al. [1], which developed a differentially-private stochastic gradient descent (SGD) algorithm, DP-SGD, in the centralized (single-node) setting. More recently, several differentially-private algorithms [30,63,55,46] have been proposed for the more general distributed (n-node) setting suitable for FL. In this paper, we also follow the DP approach to preserve privacy.…”
Section: Motivation: Privacy-Utility-Communication Trade-offs
confidence: 99%
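As context for the quoted comparison, the following is a minimal sketch of a DP-SGD-style step in the spirit of Abadi et al. [1]: per-example gradients are clipped to a fixed L2 bound and Gaussian noise is added before averaging. The toy squared loss, clip norm, and noise multiplier are illustrative assumptions, not parameters taken from the cited works.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD-style step on a toy squared loss (illustrative sketch only)."""
    rng = np.random.default_rng() if rng is None else rng
    grads = []
    for xi, yi in zip(X, y):
        # Per-example gradient of 0.5 * (w·x - y)^2 with respect to w.
        g = (w @ xi - yi) * xi
        # Clip each per-example gradient to L2 norm <= clip_norm.
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)
        grads.append(g)
    # Sum the clipped gradients, add Gaussian noise scaled to the clipping
    # bound, then average and take a gradient step.
    noisy_sum = np.sum(grads, axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape
    )
    return w - lr * noisy_sum / len(X)

# Toy usage with synthetic data (hypothetical, for illustration only):
rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 5)), rng.normal(size=32)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y, rng=rng)
```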
“…However, the communication complexity of CDP-SGD still has room for improvement due to direct compression (Line 5 in Algorithm 1). In particular, if the size of the local dataset m stored on the clients is dominating, then CDP-SGD (even if we compute local full gradients, as in CDP-GD) requires O(m²) communication rounds (see Theorem 1), while previous distributed differentially-private algorithms without communication compression (e.g., Distributed DP-SRM [63], LDP SVRG and LDP SPIDER [46]) only need O(m) communication rounds (see Table 1).…”
confidence: 99%
“…Our ADMM formulation (8) shows that this computation burden can be distributed among drivers' cell phones. This distributed optimization/federated learning framework can also offer the other standard advantages of federated learning/distributed systems [73,74]. For example, when proper privacy-preserving mechanisms (such as differential privacy [75]) are utilized, we can guarantee the privacy of drivers, since they can participate in the optimization procedure without completely sharing their data.…”
Section: Algorithm For Offering Incentives and A Distributed Implemen…
confidence: 99%
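For illustration of the privacy mechanism mentioned in this quote, here is a minimal sketch (with assumed placeholder parameters) of how a client device might clip and perturb its local update with Gaussian noise before sharing it with a coordinator; this is not the cited paper's actual mechanism or calibration.

```python
import numpy as np

def privatize_local_update(local_update, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip a client's local update and add Gaussian noise before sharing it.

    Illustrative Gaussian-mechanism-style release; clip_norm and noise_std
    are placeholders, not calibrated (epsilon, delta) values.
    """
    rng = np.random.default_rng() if rng is None else rng
    update = np.asarray(local_update, dtype=float)
    # Bound each client's influence by clipping the update's L2 norm.
    update = update / max(1.0, np.linalg.norm(update) / clip_norm)
    # Add isotropic Gaussian noise so the raw local data is never revealed.
    return update + rng.normal(scale=noise_std, size=update.shape)

# Hypothetical server-side aggregation of privatized updates (one round):
# global_step = np.mean([privatize_local_update(u) for u in client_updates], axis=0)
```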
“…Compared with DP non-convex minimization analyses (e.g., [Wang et al., 2019, Hu et al., 2021, Ding et al., 2021b, Lowy et al., 2022]), the two noises required to privatize the solution of the min-max problem we consider complicate the analysis and require careful tuning of η_θ and η_W. Compared to existing analyses of DP min-max games in [Boob and Guzmán, 2021, Yang et al., 2022, Zhang et al., 2022], which assume that f(·, w) is convex or PL, dealing with non-convexity is a challenge that requires different optimization techniques.…”
Section: Noisy DP-SGDA For Nonconvex-Strongly-Concave Min-Max Problems
confidence: 99%
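To illustrate the two-noise structure this quote refers to, below is a minimal sketch of one noisy gradient descent-ascent step with separate stepsizes η_θ and η_W and independent Gaussian perturbations on each gradient. The gradient oracles, clipping bound, and noise scales are hypothetical placeholders rather than the authors' calibrated DP-SGDA.

```python
import numpy as np

def noisy_dp_sgda_step(theta, w, grad_theta, grad_w,
                       eta_theta=0.01, eta_w=0.1,
                       clip_norm=1.0, sigma_theta=1.0, sigma_w=1.0, rng=None):
    """One descent-ascent step with independent Gaussian noise on each gradient.

    grad_theta(theta, w) and grad_w(theta, w) are assumed stochastic gradient
    oracles for f; all constants here are illustrative placeholders.
    """
    rng = np.random.default_rng() if rng is None else rng

    def clip(g):
        # Bound the gradient's L2 norm so the added noise can mask any sample.
        return g / max(1.0, np.linalg.norm(g) / clip_norm)

    # Descent step on the (possibly non-convex) primal variable theta.
    g_theta = clip(grad_theta(theta, w))
    theta = theta - eta_theta * (g_theta + rng.normal(scale=sigma_theta, size=theta.shape))

    # Ascent step on the strongly concave dual variable w.
    g_w = clip(grad_w(theta, w))
    w = w + eta_w * (g_w + rng.normal(scale=sigma_w, size=w.shape))

    return theta, w
```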