2021
DOI: 10.29012/jpc.755

Differentially private false discovery rate control

Abstract: Differential privacy provides a rigorous framework for privacy-preserving data analysis. This paper proposes the first differentially private procedure for controlling the false discovery rate (FDR) in multiple hypothesis testing. Inspired by the Benjamini-Hochberg procedure (BHq), our approach is to first repeatedly add noise to the logarithms of the p-values to ensure differential privacy and to select an approximately smallest p-value serving as a promising candidate at each iteration; the selected p-values…
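The selection step the abstract sketches resembles a "report-noisy-min" peeling loop over log p-values combined with a BHq-style stopping rule. Below is a minimal Python sketch of that idea; the function name dp_select_rejections, the per-round budget eps_per_round, the Laplace noise scale, and the step-up-style threshold k * alpha / m are all illustrative assumptions, not the paper's calibrated procedure or its privacy accounting.

```python
import numpy as np

def dp_select_rejections(p_values, alpha=0.05, eps_per_round=0.1, rng=None):
    """Sketch of the abstract's idea: per round, add Laplace noise to the
    log p-values of the remaining hypotheses, pick the approximately
    smallest one (report-noisy-min), and stop once the noisy candidate
    exceeds a BHq-style threshold k * alpha / m. The noise scale and
    stopping rule here are illustrative, not the paper's calibration."""
    rng = np.random.default_rng() if rng is None else rng
    p_values = np.asarray(p_values, dtype=float)
    m = len(p_values)
    log_p = np.log(p_values)
    remaining = list(range(m))
    selected = []
    for k in range(1, m + 1):
        # Report-noisy-min over the hypotheses not yet selected.
        noisy = log_p[remaining] + rng.laplace(scale=1.0 / eps_per_round,
                                               size=len(remaining))
        i = int(np.argmin(noisy))
        # Stop when the noisy minimum exceeds log(k * alpha / m).
        if noisy[i] > np.log(k * alpha / m):
            break
        selected.append(remaining.pop(i))
    return selected
```

Note that with the noise removed this loop scans step-down through the BHq thresholds rather than applying the usual step-up rule; the paper's actual candidate selection and stopping criterion differ in their details.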

Cited by 7 publications (3 citation statements). References 48 publications (77 reference statements).
“…Laplace or Gaussian noise) to the test statistic or, more generally, to the queried result (Dwork et al, 2015a). While theoretical guarantees are available for differential-privacy-based methods for test-data reuse (e.g., Dwork et al, 2015b; Russo and Zou, 2016; Rogers et al, 2016; Cummings et al, 2016; Dwork et al, 2017; Steinke, 2017, 2018; Shenfeld and Ligett, 2019; Gossmann et al, 2021; and others), these methods either require a test dataset whose size is prohibitively large for many application domains or require injecting very large amounts of noise (Rogers et al, 2019; Gossmann et al, 2021). An alternative approach is to directly limit the number of bits of information released to the model developer by discretizing the queried result along some grid (Blum and Hardt, 2015).…”
Section: Related Work (mentioning)
confidence: 99%
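The "add calibrated noise to the queried result" approach this statement describes is, in its simplest form, the Laplace mechanism. A minimal sketch follows, assuming a real-valued query with known sensitivity; the statistic value, sample size, and epsilon in the usage line are hypothetical.

```python
import numpy as np

def laplace_mechanism(query_result, sensitivity, epsilon, rng=None):
    """Standard epsilon-differentially-private release: add Laplace noise
    with scale sensitivity / epsilon to the queried result."""
    rng = np.random.default_rng() if rng is None else rng
    return query_result + rng.laplace(scale=sensitivity / epsilon)

# Hypothetical usage: a test statistic that is a mean of values in [0, 1]
# over n = 1000 records, so the sensitivity of the mean is 1/n.
noisy_stat = laplace_mechanism(0.42, sensitivity=1.0 / 1000, epsilon=0.5)
```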
“…This means that differentially private OLS based on these ERM algorithms requires us to devise new versions of those algorithms, making this a second step in this line of work (after first understanding what we can do using existing algorithms). We leave this approach, as well as performing private hypothesis testing using a PTR-type algorithm (Dwork & Lei, 2009) that outputs merely a reject/don't-reject decision without justification, or releasing only the relevant tests as judged by their p-values (Dwork et al, 2015), for future work.…”
Section: Introduction (mentioning)
confidence: 99%
“…Discussion. Some works have already looked at the intersection of differential privacy and statistics (Dwork & Lei, 2009; Smith, 2011; Chaudhuri & Hsu, 2012; Duchi et al, 2013; Dwork et al, 2015), especially focusing on robust statistics and rates of convergence. But only a handful of works have studied the significance and power of hypothesis testing under differential privacy without arguing that the noise introduced by differential privacy vanishes asymptotically (Vu & Slavkovic, 2009; Uhler et al, 2013; Wang et al, 2015; Rogers et al, 2016).…”
Section: Introduction (mentioning)
confidence: 99%