Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing 2019
DOI: 10.1145/3313276.3316377

Private selection from private candidates

Abstract: Differentially private algorithms often need to select the best among many candidate options. Classical works on this selection problem require that the candidates' goodness, measured as a real-valued score function, does not change by much when one person's data changes. In many applications, such as hyperparameter optimization, this stability assumption is much too strong. In this work, we consider the selection problem under a much weaker stability assumption on the candidates, namely that the score functions are themselves differentially private.
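For context, here is a minimal sketch of the classical exponential mechanism that the abstract's "classical works" refer to; the function name and NumPy-based implementation are illustrative, not taken from the paper:

```python
import numpy as np

def exponential_mechanism(scores, epsilon, sensitivity, rng=None):
    """Classical exponential mechanism: select candidate i with probability
    proportional to exp(epsilon * score[i] / (2 * sensitivity)).

    This requires that each score changes by at most `sensitivity` when one
    person's data changes -- exactly the stability assumption the paper relaxes.
    """
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    # Subtract the max score for numerical stability; the distribution is unchanged.
    logits = epsilon * (scores - scores.max()) / (2.0 * sensitivity)
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# Example: select among four candidates with epsilon = 1 and sensitivity 1.
print(exponential_mechanism([3.0, 7.5, 7.4, 1.0], epsilon=1.0, sensitivity=1.0))
```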

Cited by 51 publications (56 citation statements: 2 supporting, 54 mentioning, 0 contrasting; published 2020–2024). References 31 publications.
“…3) finally, one could also run all the algorithms in parallel and choose the best performing one. Here we will have to account for the loss in privacy budget; see Liu and Talwar [2019] for example.…”
Section: Methods (mentioning; confidence: 99%)
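To make the budget accounting in that statement concrete, here is a hedged sketch of the naive "run everything, keep the best" strategy under basic composition; the trainer interface (`candidate_trainers`, `train(data, epsilon=...)`) is hypothetical:

```python
def select_best_naive(candidate_trainers, data, total_epsilon):
    """Run k candidate training algorithms in parallel on the same data and
    keep the best-performing one.

    Budget accounting via basic composition: if each of the k runs is
    (total_epsilon / k)-DP, the whole loop is total_epsilon-DP, and taking the
    argmax of the already-privatized scores is free post-processing.
    """
    per_run_epsilon = total_epsilon / len(candidate_trainers)  # basic composition
    best_model, best_score = None, float("-inf")
    for train in candidate_trainers:
        # Each trainer must itself be per_run_epsilon-DP and return a
        # privately computed validation score alongside the trained model.
        model, private_score = train(data, epsilon=per_run_epsilon)
        if private_score > best_score:
            best_model, best_score = model, private_score
    return best_model, best_score
```

The point of Liu and Talwar [2019] is that one can do substantially better than splitting the budget k ways: their random-stopping selection pays only a small constant factor over the privacy cost of a single candidate run.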
“…Other proposed mechanisms for private selection are the large margin mechanism that guarantees approximate differential privacy [26], or the more scalable subsampled exponential mechanism [27]. Other studies propose considering the quality functions themselves differentially private for cases where one cannot use the exponential mechanism [28], or a generalization of the exponential mechanism to handle quality functions of varying sensitivity [29]. In contrast, while our new mechanism also tackles differentially private selection, it is unique in the use of RR to increase the accuracy of the output.…”
Section: Differential Privacy Selection (mentioning; confidence: 99%)
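The RR primitive highlighted in that statement is classical binary randomized response. A minimal standalone sketch of that primitive (not the cited work's mechanism) looks like this:

```python
import math
import random

def randomized_response(true_bit, epsilon, rng=random):
    """Classical binary randomized response: report the true bit with
    probability e^eps / (e^eps + 1), otherwise report its flip.
    This local mechanism is epsilon-differentially private."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if rng.random() < p_truth else 1 - true_bit

# Example: a respondent's true answer is 1; the released answer is noisy.
print(randomized_response(1, epsilon=1.0))
```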
“…[31] The number of mixture components K was set to 10 for data with fewer dimensions (<20) and to 20 for data with more dimensions (≥20). If necessary, this number, along with hyperparameters of DPVI, could be optimized under DP [32], with potentially significant extra computational cost.…”
Section: Mixture Model (mentioning; confidence: 99%)
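Since that statement points to optimizing a hyperparameter such as K under DP [32], here is a hedged sketch of the random-stopping selection strategy in the spirit of Liu and Talwar [2019]; the `sample_candidate` callable and stopping rate `gamma` are illustrative assumptions:

```python
import random

def private_selection_random_stop(sample_candidate, gamma=0.05, rng=random):
    """Random-stopping selection in the spirit of Liu and Talwar [2019]:
    repeatedly draw a random candidate (e.g., a hyperparameter setting such
    as the number of mixture components K), run it as an eps0-DP training
    job, and after each run halt with probability gamma, returning the best
    result seen so far. Under the paper's analysis the overall procedure is
    roughly 3*eps0-DP, regardless of how many candidates get evaluated.

    `sample_candidate` is a hypothetical callable that performs one eps0-DP
    run and returns (model, private_score).
    """
    best_model, best_score = None, float("-inf")
    while True:
        model, private_score = sample_candidate()
        if private_score > best_score:
            best_model, best_score = model, private_score
        if rng.random() < gamma:  # geometric stopping rule
            return best_model, best_score
```

The design choice worth noting is that the number of runs is randomized rather than fixed, which is what lets the privacy cost stay a small constant multiple of a single run's budget instead of growing with the number of candidates.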