2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS)
DOI: 10.1109/focs.2018.00056
Privacy Amplification by Iteration

Abstract: Many commonly used learning algorithms work by iteratively updating an intermediate solution using one or a few data points in each iteration. Analyses of differential privacy for such algorithms often involve ensuring privacy of each step and then reasoning about the cumulative privacy cost of the algorithm. This is enabled by composition theorems for differential privacy that allow releasing all the intermediate results. In this work, we demonstrate that for contractive iterations, not releasing the inte…

Cited by 97 publications (178 citation statements)
References 26 publications (21 reference statements)
“…Privacy amplification plays a major role in the design of differentially private mechanisms. These include amplification by subsampling [22] and by iteration [16], and the recent seminal work on amplification via shuffling by Erlingsson et al. [14]. In particular, Erlingsson et al. considered a setting more general than ours, which allows for interactive protocols in the shuffle model by first generating a random permutation of the users' inputs and then sequentially applying a (possibly different) local randomizer to each element in the permuted vector.…”
Section: Privacy Amplification by Shuffling
confidence: 99%
“…We start by describing a general version of noisy SGD and analyze its privacy using the privacy amplification by iteration technique from [15]. Recall that in our problem we are given a family of convex loss functions over some convex set K ⊆ R^d parameterized by x ∈ X; that is, ℓ(w, x) is convex and differentiable in the first parameter for every x ∈ X.…”
Section: DP SCO via Privacy Amplification by Iteration
confidence: 99%
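The noisy SGD template quoted above can be sketched as follows. This is a minimal illustration, not the algorithm of [15]: the quadratic loss, the unit-ball feasible set, and the step-size and noise-scale values are hypothetical choices made for the example.

```python
import numpy as np

def project_ball(w, radius):
    # Euclidean projection onto the ball of the given radius; projections
    # onto convex sets are contractive, which the amplification analysis needs
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def noisy_sgd(data, eta=0.1, sigma=0.5, radius=1.0, seed=0):
    # one data point per iteration; Gaussian noise added to each gradient step
    rng = np.random.default_rng(seed)
    w = np.zeros_like(data[0])
    for x in data:
        grad = w - x  # gradient of the example loss 0.5 * ||w - x||^2 in w
        w = w - eta * (grad + rng.normal(0.0, sigma, size=w.shape))
        w = project_ball(w, radius)
    return w

data = [np.array([0.5, -0.2]), np.array([0.3, 0.1]), np.array([-0.4, 0.6])]
w_final = noisy_sgd(data)
print(w_final.shape)  # (2,)
```

The point of the contractive projection step is that it never increases the distance between two trajectories, which is what lets the iteration-based amplification argument apply.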
“…The analysis of this algorithm relies on two tools. The first one is privacy amplification by iteration [15]. This privacy amplification technique ensures that, for the purposes of analyzing the privacy guarantees of a point used at step t, one can effectively treat all the noise added at subsequent steps as also added to the gradient of the loss at that point.…”
Section: Introduction
confidence: 99%
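The effect described in this passage can be illustrated numerically for the simplest contractive update, the identity map. This is an illustration only, not the paper's argument; the step index, horizon, and noise scale are hypothetical parameters. Noise injected at step t and at every later step accumulates between the step-t gradient and the released final iterate, so two runs differing only in that gradient end up as Gaussians with variance (T - t + 1) * sigma^2 around means separated by the gradient gap.

```python
import numpy as np

rng = np.random.default_rng(0)
T, t, sigma, gap = 20, 5, 1.0, 1.0  # hypothetical parameters

def final_iterate(g_t, n=100_000):
    # under the identity update, noise from steps t..T accumulates additively
    noise = rng.normal(0.0, sigma, size=(n, T - t + 1)).sum(axis=1)
    return g_t + noise

a = final_iterate(0.0)   # run where the step-t gradient is 0
b = final_iterate(gap)   # run where the step-t gradient is `gap`

# KL divergence between N(0, v) and N(gap, v) with v = (T - t + 1) * sigma^2
v = (T - t + 1) * sigma**2
kl = gap**2 / (2 * v)
print(kl)  # 0.03125
```

The farther from the end the sensitive point is used (smaller t), the larger the accumulated noise variance and the smaller the divergence, which is the "amplification" in amplification by iteration.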
“…Parametrizing the privacy oracle P δ in terms of a fixed δ stems from the convention that ε is considered the most important privacy parameter, whereas δ is chosen to be a negligibly small value (δ ≪ 1/n). This choice is also aligned with recent uses of DP in machine learning where the privacy analysis is conducted under the framework of Rényi DP [39] and the reported privacy is obtained a posteriori by converting the guarantees to standard (ε, δ)-DP for some fixed δ [1,18,22,38,48]. In particular, in our experiments with gradient perturbation for stochastic optimization methods (Sec.…”
Section: Remark 1 (Privacy Oracle)
confidence: 99%
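The a-posteriori conversion mentioned in this passage can be sketched as follows. The sketch assumes the standard Rényi DP bound for T-fold composition of the Gaussian mechanism with sensitivity 1 and noise multiplier sigma, eps_alpha = T * alpha / (2 * sigma^2), together with the usual conversion eps(δ) = min over alpha of eps_alpha + log(1/δ)/(alpha - 1); the particular sigma, T, and δ values are hypothetical.

```python
import math

def rdp_gaussian(alpha, sigma, steps):
    # Renyi DP of the Gaussian mechanism (sensitivity 1), composed `steps` times
    return steps * alpha / (2 * sigma**2)

def rdp_to_dp(delta, sigma, steps, max_alpha=256):
    # convert Renyi DP to (eps, delta)-DP for a fixed delta by minimizing
    # over integer Renyi orders alpha >= 2
    best = float("inf")
    for alpha in range(2, max_alpha + 1):
        eps = rdp_gaussian(alpha, sigma, steps) + math.log(1 / delta) / (alpha - 1)
        best = min(best, eps)
    return best

eps = rdp_to_dp(delta=1e-5, sigma=40.0, steps=100)
print(round(eps, 2))
```

Tracking privacy in Rényi form through all compositions and converting only once at the end is what yields the tighter reported (ε, δ) guarantees, compared with composing (ε, δ) bounds directly.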