2019 · Preprint
DOI: 10.48550/arxiv.1910.13659

Efficient Privacy-Preserving Stochastic Nonconvex Optimization

Abstract: While many solutions for privacy-preserving convex empirical risk minimization (ERM) have been developed, privacy-preserving nonconvex ERM remains challenging. In this paper, we study nonconvex ERM, which takes the form of minimizing a finite sum of nonconvex loss functions over a training set. To achieve both efficiency and strong privacy guarantees, we propose a differentially private stochastic gradient descent algorithm for nonconvex ERM, and provide a tight analysis of its privacy an…
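The truncated abstract describes a gradient-perturbation approach to private nonconvex ERM. As a rough illustration of that idea (a minimal sketch, not the paper's actual algorithm), here is a DP-SGD loop in Python; the function names, `clip_norm`, `noise_multiplier`, and the way `sigma` is set are all illustrative assumptions:

```python
import numpy as np

def dp_sgd(grad_fn, x0, data, epochs=10, batch_size=64,
           lr=0.1, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """Minimal gradient-perturbation DP-SGD sketch (illustrative only).

    grad_fn(x, example) -> per-example gradient of the (possibly nonconvex)
    loss. Per-example gradients are clipped to bound sensitivity, then
    Gaussian noise scaled to that bound is added before the descent step.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    n = len(data)
    for _ in range(epochs):
        for _ in range(n // batch_size):
            batch = data[rng.choice(n, batch_size, replace=False)]
            # Clip each per-example gradient to L2 norm <= clip_norm.
            grads = []
            for example in batch:
                g = grad_fn(x, example)
                g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
                grads.append(g)
            g_bar = np.mean(grads, axis=0)
            # Gaussian noise calibrated to the clipped per-batch sensitivity.
            sigma = noise_multiplier * clip_norm / batch_size
            x -= lr * (g_bar + rng.normal(0.0, sigma, size=x.shape))
    return x
```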

Cited by 13 publications (28 citation statements) · References 33 publications · Citing publications span 2020–2024.
“…Algorithms such as output perturbation, which perturbs the output of a non-DP algorithm; objective perturbation, which perturbs the objective function [7]; and gradient perturbation, which adds noise to the gradients in gradient descent algorithms [4,28], have been proposed to solve DP-ERM. We mainly discuss the algorithms most related to our problem, i.e., gradient perturbation [2,4,6,30,31,32,37]. Most DP gradient-based algorithms focus on minimizing convex losses and aim to achieve optimal empirical and population risk bounds under privacy.…”
Section: Related Work (mentioning)
confidence: 99%
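To make the contrast between the perturbation strategies concrete, here is a minimal sketch of output perturbation under the Gaussian mechanism, to set against the gradient-perturbation sketch above; `train_fn` and the analytically derived `sensitivity` are assumed inputs here, not anything specified in the cited works:

```python
import numpy as np

def output_perturbation(train_fn, data, sensitivity, epsilon, delta, seed=0):
    """Sketch of output perturbation: train non-privately, then add noise
    to the final model. `sensitivity` (the L2 sensitivity of train_fn to
    changing one record) must be derived analytically, e.g. from strong
    convexity; it is an assumed input here, not computed.
    """
    rng = np.random.default_rng(seed)
    theta = train_fn(data)  # non-private ERM solution
    # Gaussian mechanism: sigma calibrated for (epsilon, delta)-DP.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return theta + rng.normal(0.0, sigma, size=theta.shape)
```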
“…Recently, DP algorithms have been studied for non-convex loss functions [30,31,32,37]. Since finding the global minimum of a non-convex function is NP-hard, the utility of a DP algorithm is typically measured by the $\ell_2$-norm of the gradient.…”
Section: Related Work (mentioning)
confidence: 99%
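For concreteness, the first-order stationarity measure referred to here can be written as follows (the notation is ours, and the exact utility bounds differ across the cited works):

```latex
% Utility measure for DP nonconvex ERM: the expected gradient norm at the
% private output (first-order stationarity), since global optimality is
% out of reach for nonconvex F.
\[
  \mathrm{Utility}\bigl(x^{\mathrm{priv}}\bigr)
    = \mathbb{E}\,\bigl\|\nabla F(x^{\mathrm{priv}})\bigr\|_2,
  \qquad
  F(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x).
\]
```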
“…where each $f_i : \mathbb{R}^d \to \mathbb{R}$ is a smooth convex loss function, and $g : \mathbb{R}^d \to \mathbb{R}$ is a simple convex (non-)smooth regularizer such as the $\ell_1$-norm or $\ell_2$-norm regularizer. Many differentially private algorithms have been proposed to deal with ERM problems, such as DP-SGD [18], DP-SVRG [19], and DP-SRGD [20]. Therefore, this paper mainly considers the generalized ERM problem with more complex regularizers (e.g., $g(x) = \lambda \|Ax\|_1$ with a given matrix $A$ and a regularization parameter $\lambda > 0$), such as the graph-guided fused Lasso [21], the generalized Lasso [22], and the graph-guided support vector machine (SVM) [23].…”
Section: Introduction (mentioning)
confidence: 99%
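As a small illustration of why these regularizers are "more complex": the plain $\ell_1$-norm has a closed-form proximal operator (soft-thresholding), whereas $g(x) = \lambda\|Ax\|_1$ with a general matrix $A$ does not, which is what motivates splitting methods. A hedged Python sketch of the closed-form case:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||x||_1 (soft-thresholding):
    argmin_x 0.5*||x - v||^2 + t*||x||_1, solved coordinate-wise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_step(x, grad, lr, lam):
    """One proximal gradient step for min F(x) + lam*||x||_1.
    For g(x) = lam*||A x||_1 with a general A the prox has no closed
    form, which is why splitting methods (e.g. ADMM) are used for
    graph-guided fused Lasso and similar problems."""
    return prox_l1(x - lr * grad, lr * lam)
```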