2016
DOI: 10.1007/978-3-319-45381-1_10
On-Average KL-Privacy and Its Equivalence to Generalization for Max-Entropy Mechanisms

Abstract: We define On-Average KL-Privacy and present its properties and connections to differential privacy, generalization, and information-theoretic quantities including max-information and mutual information. The new definition significantly weakens differential privacy while preserving its minimalistic design features, such as composition over small groups and multiple queries as well as closure under post-processing. Moreover, we show that On-Average KL-Privacy is equivalent to generalization for a large class of comm…
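For orientation, the definition can plausibly be formalized as follows (the notation is ours, reconstructed from the abstract rather than quoted from the paper): let Z = (z_1, …, z_n) be drawn i.i.d. from a distribution D, and let Z' be Z with one entry replaced by a fresh draw z' ~ D. A mechanism A is then ε-On-Average KL-Private if

\mathbb{E}_{Z \sim D^n,\; z' \sim D}\Big[\mathrm{KL}\big(A(Z) \,\big\|\, A(Z')\big)\Big] \le \varepsilon, \qquad Z' = (z_1, \dots, z_{n-1}, z').

By contrast, ε-differential privacy bounds the worst-case max-divergence \sup_{Z,Z'} D_\infty\big(A(Z) \,\|\, A(Z')\big) over all neighboring pairs; replacing the worst case with an average over the data distribution is precisely what "significantly weakens" the guarantee while retaining composition and post-processing behavior.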

Cited by 38 publications (33 citation statements) · References 32 publications
“…A machine learning model is said to overfit to its training data when its performance on unseen test data diverges from the performance observed during training, i.e., its generalization error is large. The relationship between privacy risk and overfitting is further supported by recent results that suggest the contrapositive, i.e., under certain reasonable assumptions, differential privacy [13] and related notions of privacy [18,19] imply good generalization. However, a precise account of the connection between overfitting and the risk posed by different types of attack remains unknown.…”
Section: Introduction · mentioning · confidence: 54%
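As a concrete reading of "generalization error" in this statement: it is the gap between a model's loss on held-out data and its loss on the training data. A minimal sketch in Python (the model, data, and scikit-learn usage are illustrative assumptions, not taken from the cited papers):

# Minimal sketch: measure the generalization gap (test loss minus train loss).
# A large positive gap is the overfitting the quoted statement refers to.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                               # synthetic features
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)   # noisy binary labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

train_loss = log_loss(y_tr, model.predict_proba(X_tr))
test_loss = log_loss(y_te, model.predict_proba(X_te))
print(f"generalization gap: {test_loss - train_loss:.4f}")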
“…In particular, Bassily et al [18] studied a notion of privacy called total variation stability and proved good generalization with respect to a bounded number of adaptively chosen low-sensitivity queries. Moreover, for data drawn from Gibbs distributions, Wang et al [19] showed that on-average KL privacy is equivalent to generalization error as defined in this paper. While these results give evidence for the relationship between privacy and overfitting, we construct an attacker that directly leverages overfitting to gain advantage commensurate with the extent of the overfitting.…”
Section: Privacy and Machine Learning · mentioning · confidence: 84%
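The "Gibbs distributions" invoked here are the max-entropy mechanisms of the paper's title. A sketch of the standard form (γ > 0 is an assumed inverse-temperature parameter and ℓ an assumed loss function; the paper's exact setup may differ):

p_{A(Z)}(\theta) \;\propto\; \exp\Big(-\gamma \sum_{i=1}^{n} \ell(\theta, z_i)\Big).

Posterior sampling and the exponential mechanism both fit this form, which is why the claimed equivalence covers a large class of commonly used statistical tools.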
“…In [19], the authors note that ε also needs to tend to 0 at some rate to preserve generalization, which matches our result.…”
Section: Discussion · mentioning · confidence: 42%
“…Finally, pDP is related to random differential privacy (Hall et al., 2013) and on-average KL-privacy (Wang et al., 2016). They respectively measure the high-probability and expected privacy loss when z and the data points in Z are drawn i.i.d.…”
Section: Related Work · mentioning · confidence: 99%
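The high-probability versus expected distinction can be made precise with the privacy loss random variable. For an output o of A on neighboring datasets Z and Z',

\ell_{Z,Z'}(o) = \log\frac{\Pr[A(Z) = o]}{\Pr[A(Z') = o]}, \qquad \mathrm{KL}\big(A(Z) \,\|\, A(Z')\big) = \mathbb{E}_{o \sim A(Z)}\big[\ell_{Z,Z'}(o)\big].

Pure ε-differential privacy requires |ℓ_{Z,Z'}(o)| ≤ ε for every output and every neighboring pair; random differential privacy relaxes this to hold with high probability when the data are drawn i.i.d.; on-average KL-privacy bounds only the expectation, which is the KL divergence itself.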