Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science (ITCS 2016)
DOI: 10.1145/2840728.2840747

Simultaneous Private Learning of Multiple Concepts

Abstract: We investigate the direct-sum problem in the context of differentially private PAC learning: What is the sample complexity of solving k learning tasks simultaneously under differential privacy, and how does this cost compare to that of solving k learning tasks without privacy? In our setting, an individual example consists of a domain element x labeled by k unknown concepts (c_1, ..., c_k). The goal of a multi-learner is to output k hypotheses (h_1, ..., h_k) that generalize the input examples. Without co…
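As a reading aid, here is a minimal LaTeX sketch of the accuracy goal described in the abstract, stated in standard (α, β)-PAC notation; the distribution D and the parameters α, β are conventional symbols assumed here, not notation quoted from the abstract:

    % Multi-learner accuracy goal (sketch; standard PAC notation assumed).
    % Given i.i.d. examples (x, c_1(x), ..., c_k(x)) with x ~ D, the output
    % hypotheses (h_1, ..., h_k) should satisfy, with probability >= 1 - \beta:
    \forall j \in [k] : \quad \Pr_{x \sim \mathcal{D}}\left[ h_j(x) \neq c_j(x) \right] \le \alpha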


Cited by 51 publications (70 citation statements)
References 35 publications (96 reference statements)
Citing publications span 2016–2022.
“…In contrast, our analysis allows bounding (ε, δ)-DP directly and only requires increasing σ by a factor of max{Õ(k/n), 1}. We note that, in the context of PAC learning, the sample complexity of solving multiple learning problems with differential privacy was studied in [BNS16b]. The question of optimizing multiple loss functions was also studied in [Ull15, FGV17].…”
Section: Multiple Convex Optimizations (mentioning)
confidence: 99%
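To make the noise scaling mentioned in the excerpt concrete, here is a small Python sketch of a Gaussian mechanism whose standard deviation is inflated by a factor of max(k/n, 1) when answering k statistical queries on n records. The base sigma, the toy queries, and the omission of the log factors hidden in Õ(·) are illustrative assumptions, not the cited paper's calibration:

    import numpy as np

    def gaussian_multi_query(data, queries, base_sigma):
        """Answer k queries (each mapping a record to [0, 1]) on n records.

        Illustrative sketch only: per the quoted excerpt, the noise scale
        grows by max(k/n, 1); the exact Õ(k/n) factor in the cited analysis
        hides logarithmic terms that are omitted here.
        """
        n, k = len(data), len(queries)
        sigma = base_sigma * max(k / n, 1.0)
        answers = []
        for q in queries:
            true_avg = sum(q(x) for x in data) / n  # each record shifts this by at most 1/n
            answers.append(true_avg + np.random.normal(0.0, sigma))
        return answers

    # Example: two toy queries over synthetic binary records.
    records = [0, 1, 1, 0, 1]
    queries = [lambda x: x, lambda x: 1 - x]
    print(gaussian_multi_query(records, queries, base_sigma=0.1))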
“…We note that the Ramsey argument in the first part is quite general: it does not use the definition of differential privacy and could perhaps be useful in other sample complexity lower bounds. Also, a similar argument was used by Bun [2016] in a weaker lower bound for privately learning thresholds in the proper case. However, the second and more technical part of the proof is tailored specifically to the definition of differential privacy.…”
Section: Proof Overview (mentioning)
confidence: 84%
“…For a large data universe X, the running time of BasicHistogram_{M,X}, which is at least linear in |X|, can be prohibitive. By using approximate differential privacy, we can release counts for a smaller number of bins (at most n) based on stability techniques [KKMN09, BNS16]. We present a generalization of the algorithm from [BNS16].…”
Section: Stability-Based Histogram (mentioning)
confidence: 99%
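To illustrate the stability technique the excerpt refers to, here is a minimal Python sketch of a stability-based histogram in the spirit of [KKMN09, BNS16]: noise is added only to the at most n bins that actually appear in the data, and noisy counts below a threshold are suppressed, which is what yields (ε, δ)-DP without enumerating the universe X. The Laplace scale and the threshold formula below are one standard calibration, stated here as an assumption rather than a quotation from the cited papers:

    import math
    from collections import Counter
    import numpy as np

    def stability_histogram(data, eps, delta):
        """Release a histogram over an arbitrary (possibly huge) universe.

        Sketch of the stability technique: only bins present in the data
        (at most n of them) are touched, so the running time is O(n) rather
        than linear in |X|. Noisy counts below the threshold are dropped,
        which is what buys (eps, delta)-DP. The threshold
        1 + 2*ln(2/delta)/eps is one common calibration, assumed here
        for illustration.
        """
        counts = Counter(data)  # touches only the occupied bins
        threshold = 1.0 + 2.0 * math.log(2.0 / delta) / eps
        released = {}
        for bin_id, c in counts.items():
            noisy = c + np.random.laplace(0.0, 2.0 / eps)
            if noisy >= threshold:
                released[bin_id] = noisy
        return released

    # Example: a string-valued universe far too large to enumerate.
    print(stability_histogram(["a", "a", "b"] * 40, eps=1.0, delta=1e-6))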