2022
DOI: 10.48550/arxiv.2203.00263
Preprint

Private Convex Optimization via Exponential Mechanism

Abstract: In this paper, we study private optimization problems for non-smooth convex functions F(x). We show that modifying the exponential mechanism by adding an ℓ₂² regularizer to F(x) and sampling from π(x) ∝ exp(−k(F(x) + μ‖x‖₂²/2)) recovers both the known optimal empirical risk and population loss under (ε, δ)-DP. Furthermore, we show how to implement this mechanism using Õ(n min(d, n)) queries to fᵢ(x) for DP-SCO, where n is the number of samples/users and d is the ambient dimension. We also give a (nearly) matching lower bound…
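Computationally, the mechanism in the abstract is a sampling problem: draw x from π(x) ∝ exp(−k(F(x) + μ‖x‖₂²/2)). The following is a minimal sketch of one generic way to approximate such a draw, using Metropolis-adjusted Langevin (MALA) on the regularized potential. It is not the paper's algorithm: the names F and grad_F and the constants k, mu, step, and n_steps are illustrative assumptions, not the privacy-calibrated values the paper derives from ε, δ, and the Lipschitz constant.

```python
import numpy as np

def exp_mechanism_mala(F, grad_F, d, k=1.0, mu=0.1,
                       step=1e-3, n_steps=5000, rng=None):
    """Approximately sample x ~ pi(x) ∝ exp(-k * (F(x) + mu * ||x||^2 / 2)).

    Sketch only: k, mu, step, and n_steps are illustrative placeholders,
    not the privacy-calibrated choices derived in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng

    def U(x):  # potential of the regularized target density
        return k * (F(x) + 0.5 * mu * np.dot(x, x))

    def grad_U(x):  # (sub)gradient of the potential
        return k * (grad_F(x) + mu * x)

    x = np.zeros(d)
    for _ in range(n_steps):
        # Langevin proposal: gradient step plus Gaussian noise.
        mean_fwd = x - step * grad_U(x)
        prop = mean_fwd + np.sqrt(2.0 * step) * rng.standard_normal(d)
        # Metropolis-Hastings correction keeps pi invariant.
        mean_bwd = prop - step * grad_U(prop)
        log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (4.0 * step)
        log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (4.0 * step)
        if np.log(rng.uniform()) < (U(x) - U(prop)) + (log_q_bwd - log_q_fwd):
            x = prop
    return x
```

For a non-smooth loss such as F(x) = (1/n) Σᵢ |⟨aᵢ, x⟩ − bᵢ|, grad_F would return a subgradient. The paper's own implementation reaches the Õ(n min(d, n)) query bound through a more careful sampler than plain MALA; this block only illustrates the target density.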

Cited by 4 publications (16 citation statements). References 13 publications (20 reference statements).

Citation statements (ordered by relevance):
“…This result holds even when F is a distribution over G-Lipschitz functions, and we only have sample access to this distribution. This extends a similar implementation of the marginal sampler required by [LST21b] for log-Lipschitz densities in the ℓ₂ norm, given by [GLL22]. The remaining complexity of the marginal sampling depends on the structure of the chosen ϕ and X, but is independent of F; we give a discussion of this aspect of our sampler in Sections 5.3 and 6.…”
Section: Our Results (mentioning)
confidence: 75%
“…π(x | y) or π(y | x), mixes rapidly. We give an extended discussion on recent activity on designing and harnessing proximal samplers building upon [LST21b] in Section 1.3, but mention that instantiations of the framework have resulted in state-of-the-art runtimes for many structured density families [CCSW22, LC22, GLL22]. Motivated by the success of proximal methods in the Euclidean setting, one goal of our work is to extend this technique to non-Euclidean geometries.…”
Section: Introduction (mentioning)
confidence: 99%
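For concreteness, the proximal sampler this statement refers to alternates the two conditionals of an augmented density π(x, y) ∝ exp(−f(x) − ‖x − y‖²/(2η)): the y | x step is an exact Gaussian draw, and the x | y step is the restricted Gaussian oracle that [LST21b] and [GLL22] implement exactly (e.g., by rejection sampling for Lipschitz f). The sketch below is a simplification under that framing: η and the iteration counts are illustrative, and a short inner Metropolis chain stands in for the exact oracle.

```python
import numpy as np

def proximal_sampler(f, d, eta=0.1, n_rounds=200, inner_steps=20, rng=None):
    """Gibbs-style sketch targeting pi(x) ∝ exp(-f(x)) through the
    augmented density pi(x, y) ∝ exp(-f(x) - ||x - y||^2 / (2 * eta)).

    The exact x | y step (the restricted Gaussian oracle of
    [LST21b]/[GLL22]) is replaced here by a short Metropolis chain.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(d)
    for _ in range(n_rounds):
        # Forward step: y | x is exactly Gaussian, y ~ N(x, eta * I).
        y = x + np.sqrt(eta) * rng.standard_normal(d)
        # Backward step: x | y ∝ exp(-f(x) - ||x - y||^2 / (2 * eta)),
        # approximated by an independence sampler proposing from
        # N(y, eta * I); the Gaussian factors cancel in the MH ratio,
        # leaving acceptance probability exp(f(x) - f(prop)).
        for _ in range(inner_steps):
            prop = y + np.sqrt(eta) * rng.standard_normal(d)
            if np.log(rng.uniform()) < f(x) - f(prop):
                x = prop
    return x
```

This alternation is exactly what the quoted passage means by sampling π(x | y) or π(y | x); the rapid mixing of the outer loop is the property these works establish.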
“…In a concurrent, independent, and complementary work on convex losses, Gopi, Lee, and Liu [GLL22] showed that the stationary distribution of a Metropolis-Hastings style process provides the optimal algorithm both for DP-SCO and DP-ERM under (ε, δ)-DP. It is an interesting open question to understand the necessity of the two-phase utility analysis of DP-Langevin diffusion style algorithms.…”
Section: Our Results At a Glance (mentioning)
confidence: 99%
“…The problem of obtaining sampling lower bounds is a notorious open problem raised in many prior works (see, e.g., Cheng et al, 2018b; Ge et al, 2020; Lee et al, 2021; Chatterji et al, 2022). So far, unconditional lower bounds have only been obtained in restricted settings such as in dimension 1; see Chewi et al (2022c) and the discussion therein, as well as the reduction to optimization in Gopi et al (2022). Our lower bounds are the first of their kind for Fisher information guarantees, and are some of the only lower bounds for sampling in general.…”
Section: Introduction (mentioning)
confidence: 86%