2013
DOI: 10.1007/978-3-642-40328-6_26

Private Learning and Sanitization: Pure vs. Approximate Differential Privacy

Abstract: We compare the sample complexity of private learning [Kasiviswanathan et al. 2008] and sanitization [Blum et al. 2008] under pure ε-differential privacy [Dwork et al. TCC 2006] and approximate (ε, δ)-differential privacy [Dwork et al. Eurocrypt 2006]. We show that the sample complexity of these tasks under approximate differential privacy can be significantly lower than that under pure differential privacy. We define a family of optimization problems, which we call Quasi-Concave Promise Problems, that generali…

Cited by 94 publications (155 citation statements)
References 23 publications
Citation types: 1 supporting, 150 mentioning, 0 contrasting
“…This stability of q under small changes in D is usually codified in an assumption that each score function q(i, D) is Lipschitz with respect to Hamming distance 1 changes to D. The Exponential mechanism [26] is an algorithm for DP selection under this assumption and has found numerous applications to the design of DP mechanisms. Several other mechanisms for the private selection problem have been proposed, that improve the utility guarantee under stronger assumptions [3,7,27,28,31,33].…”
Section: Introduction (mentioning)
confidence: 99%
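The Exponential mechanism [26] referenced in this citation statement admits a very compact implementation. Below is a minimal sketch in Python; the function name and the NumPy-based sampling are illustrative choices, and `sensitivity` stands for the Lipschitz constant of the score function q(i, D) discussed above.

```python
import numpy as np

def exponential_mechanism(candidates, scores, eps, sensitivity=1.0):
    """Sample a candidate with probability proportional to
    exp(eps * score / (2 * sensitivity)); this satisfies eps-DP when
    each score q(i, .) has sensitivity at most `sensitivity`."""
    scores = np.asarray(scores, dtype=float)
    # Shift by the max score for numerical stability; this leaves the
    # sampling distribution unchanged.
    logits = eps * (scores - scores.max()) / (2.0 * sensitivity)
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidates[np.random.choice(len(candidates), p=probs)]
```

With unit score sensitivity, this selects a solution under ε-differential privacy; the factor of 2 in the denominator can be dropped when the score function is monotone in the dataset.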
“…Several generalizations of the exponential mechanism have been proposed. Smith and Thakurta [33] and Beimel et al. [3] showed that the utility guarantee can be improved using the propose-test-release framework of Dwork and Lei [11] when there is a large margin between the maximum and the rest. Chaudhuri et al. [7] gave an elegant algorithm that can exploit a large margin between the maximum and the k-th maximum for any k. Raskhodnikova and Smith [31] proposed the generalized exponential mechanism, whose utility depends on the sensitivity of the maximizer rather than on the worst-case sensitivity.…”
Section: Introduction (mentioning)
confidence: 99%
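As a rough illustration of the propose-test-release idea credited above to Smith and Thakurta [33] and Beimel et al. [3]: privately test whether the gap between the top two scores is large and, if so, release the exact argmax. The sketch below is an assumption-laden simplification; the threshold and noise calibration shown are plausible placeholders, not the constants from those papers.

```python
import numpy as np

def selection_with_margin(scores, eps, delta, sensitivity=1.0):
    """Release argmax(scores) only when a noisy test says the top score
    stands out; otherwise signal the caller to fall back to a generic
    (eps, delta)-DP selection routine such as the exponential mechanism."""
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)[::-1]           # indices by score, descending
    gap = scores[order[0]] - scores[order[1]]  # margin: best minus runner-up
    # The gap changes by at most 2*sensitivity between neighboring
    # datasets, so a Laplace(2*sensitivity/eps) perturbation privatizes it.
    noisy_gap = gap + np.random.laplace(scale=2.0 * sensitivity / eps)
    # Threshold chosen (as an assumption) so an unstable argmax passes
    # the test with probability at most delta.
    threshold = (2.0 * sensitivity / eps) * np.log(1.0 / delta)
    if noisy_gap > threshold:
        return int(order[0])   # large margin: the argmax is stable, release it
    return None                # small margin: caller should fall back
```

The payoff of this pattern is that, on databases with a large margin, the winner is released exactly, with no noise added to the selection itself.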
“…The mechanism A that on input D ∈ U* adds independently generated noise with distribution N(0, σ²) to each of the d output terms of f(D) preserves (ε, δ)-differential privacy. [7,19,3] Given a database S ∈ U*, consider the task of choosing a "good" solution out of a possible set of solutions F, and assume that this "goodness" is quantified using a quality function q : U* × F → ℕ assigning "scores" to solutions from F (w.r.t. the given database S).…”
Section: The Framework of Global Sensitivity [8] (mentioning)
confidence: 99%
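The first sentence of this quote is the Gaussian mechanism. A minimal sketch follows, using the classical calibration σ = √(2 ln(1.25/δ)) · Δ₂f / ε (valid for ε < 1, as in Dwork and Roth's textbook analysis); the function name and interface are illustrative assumptions.

```python
import numpy as np

def gaussian_mechanism(f_value, l2_sensitivity, eps, delta):
    """Add i.i.d. N(0, sigma^2) noise to each of the d coordinates of
    f(D), with sigma calibrated so the output is (eps, delta)-DP
    (classical analysis, valid for eps < 1)."""
    f_value = np.asarray(f_value, dtype=float)
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / eps
    return f_value + np.random.normal(loc=0.0, scale=sigma, size=f_value.shape)
```

Here `l2_sensitivity` is Δ₂f, the maximum L2 change in f(D) between neighboring databases; the same σ is used for every coordinate because the sensitivity bound is taken over the whole d-dimensional output.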
“…Since then, a number of works have improved our understanding of the sample complexity (the minimum number of examples) required by such learners to simultaneously achieve accuracy and privacy. Some of these works showed that privacy incurs an inherent additional cost in sample complexity; that is, some concept classes require more samples to learn privately than they require to learn without privacy [BKN10, CH11, BNS13, FX14, CHS14, BNSV15]. In this work, we address the complementary question of whether there is also a computational price of differential privacy for learning tasks, for which much less is known.…”
Section: Introduction (mentioning)
confidence: 99%