2023
DOI: 10.48550/arxiv.2302.03884
Preprint

DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning

Abstract: Differentially private optimization for nonconvex smooth objectives is considered. In previous work, the best known utility bound is O(√d / (n ε_DP)) in terms of the squared full gradient norm, achieved by Differentially Private Gradient Descent (DP-GD) as an instance, where n is the sample size, d is the problem dimensionality, and ε_DP is the differential privacy parameter. To improve on the best known utility bound, we propose a new differentially private optimization framework called DIFF2 (DIFFerential …
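
The abstract refers to DP-GD as the baseline achieving the O(√d / (n ε_DP)) utility bound. The sketch below is a minimal, illustrative DP-GD loop (per-sample gradient clipping plus Gaussian noise on the summed gradient), not the paper's implementation: the per_sample_grads interface and the hyperparameters clip_C, sigma, eta, and T are assumptions for illustration, and the noise calibration needed for a concrete (ε_DP, δ) guarantee is omitted.

import numpy as np

def dp_gd(per_sample_grads, w0, clip_C=1.0, sigma=1.0, eta=0.1, T=100, rng=None):
    """Minimal DP-GD sketch: clip per-sample gradients, add Gaussian noise,
    and take a full-batch gradient step.

    per_sample_grads(w) is assumed to return an (n, d) array of per-sample
    gradients at w. clip_C, sigma, eta, and T are illustrative values; the
    noise scale required for a target (eps_DP, delta) guarantee depends on
    the privacy accounting, which is not shown here.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(T):
        grads = per_sample_grads(w)                        # shape (n, d)
        n = grads.shape[0]
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        clipped = grads / np.maximum(1.0, norms / clip_C)  # L2-clip each row to clip_C
        noise = rng.normal(0.0, sigma * clip_C, size=w.shape)
        g = (clipped.sum(axis=0) + noise) / n              # noisy full-batch gradient
        w = w - eta * g                                    # plain GD step
    return w

For example, with a quadratic objective one could pass per_sample_grads = lambda w: X * (X @ w - y)[:, None] for data (X, y); DIFF2 itself differs by privatizing gradient differences rather than the gradients directly, which is what the framework in the abstract is about.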
