2021
DOI: 10.48550/arxiv.2111.04609
Preprint

A Private and Computationally-Efficient Estimator for Unbounded Gaussians

Abstract: We give the first polynomial-time, polynomial-sample, differentially private estimator for the mean and covariance of an arbitrary Gaussian distribution N(μ, Σ) in R^d. All previous estimators are either nonconstructive, with unbounded running time, or require the user to specify a priori bounds on the parameters μ and Σ. The primary new technical tool in our algorithm is a new differentially private preconditioner that takes samples from an arbitrary Gaussian N(0, Σ) and returns a matrix A such that A Σ A^T has constant condition number.
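The paper's preconditioner is an iterative, carefully analyzed construction; the sketch below is only a rough illustration of the general idea, under the strong extra assumption that samples are pre-clipped to norm at most clip_norm so the top eigenvalue of the empirical covariance has bounded sensitivity. All names (private_top_eigenvalue, precondition) are hypothetical, not from the paper.

```python
import numpy as np

def private_top_eigenvalue(X, epsilon, clip_norm):
    # Crude (epsilon, 0)-DP estimate of the top eigenvalue of the
    # empirical covariance. Rows of X are clipped to norm <= clip_norm,
    # so swapping one sample moves the covariance by at most
    # 2 * clip_norm**2 / n in spectral norm, and eigenvalues are
    # 1-Lipschitz under that norm (Weyl's inequality).
    n, _ = X.shape
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X = X * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    cov = X.T @ X / n
    top = np.linalg.eigvalsh(cov)[-1]  # eigenvalues in ascending order
    sensitivity = 2.0 * clip_norm**2 / n
    noise = np.random.laplace(scale=sensitivity / epsilon)
    return max(top + noise, 1e-12)

def precondition(X, epsilon, clip_norm):
    # Rescale so the privately estimated top eigenvalue is about 1.
    lam = private_top_eigenvalue(X, epsilon, clip_norm)
    return X / np.sqrt(lam)
```

A single rescaling like this only helps when the condition number is already moderate; the paper's contribution is making such a step work iteratively, with no a priori bound on Σ at all.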

Cited by 4 publications (8 citation statements); References 14 publications.
“…In an independent work, Kamath, Mouzakis, Singhal, Steinke, and Ullman [22] propose a different polynomial time algorithm for privately learning unbounded high-dimensional Gaussian distributions. At a high-level, they devise an iterative preconditioning method for the covariance matrix (in the spirit of [4,21]).…”
Section: Discussion of a Concurrent Result
Citation type: mentioning (confidence: 99%)
“…For learning a subspace approximately, they provided an algorithm that was sample-efficient. The latter was also used (with some modifications) in [22] to efficiently learn the covariance matrix of a Gaussian distribution. In this section, we give a computationally-efficient algorithm for learning a subspace exactly with a slightly different sample complexity (albeit with different assumptions).…”
Section: Learning the Subspace
Citation type: mentioning (confidence: 99%)
“…Previous work on mean estimation under local differential privacy (Duchi et al 2018) assumes that D = 1 and deploys a Laplace privacy mechanism, which we show is sub-optimal when D is large. Previous work has noted the difficulty of private estimation with unbounded parameter spaces, both in the central model of privacy (Brunel & Avella-Medina 2020, Kamath et al 2021) and the local model (Duchi et al 2013). In the central model, bounds on the unknown mean can be avoided with careful procedures, but in the local model consistent estimation is impossible when D = ∞, even without contamination (see Appendix G, Duchi et al 2013).…”
Section: A Summary of Our Contributions
Citation type: mentioning (confidence: 99%)
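As a concrete illustration of the D-dependence discussed in the quote above, here is a minimal sketch of scalar mean estimation under the local model with a Laplace mechanism: each user clips their value to [-D, D], so its sensitivity is 2D and the per-report noise scale 2D/ε grows linearly in D, which is why no finite calibration exists when D = ∞. Function names are hypothetical.

```python
import numpy as np

def local_dp_report(x, epsilon, D):
    # Each user clips to [-D, D] (sensitivity 2D) and adds Laplace
    # noise locally, before anything leaves their device.
    return np.clip(x, -D, D) + np.random.laplace(scale=2.0 * D / epsilon)

def ldp_mean(xs, epsilon, D):
    # The analyst only ever sees the noisy reports.
    return np.mean([local_dp_report(x, epsilon, D) for x in xs])
```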
“…We need to find an approximate range of the minibatch of gradients in order to adaptively truncate the gradients and bound the sensitivity. Inspired by a private preconditioning mechanism designed for mean estimation with unknown covariance from [46], we propose to use privately estimated top eigenvalue of the covariance matrix of the gradients. For details on the version of the histogram learner we use in Alg.…”
Section: Differentially Private Principal Component Analysis (DP-PCA)
Citation type: mentioning (confidence: 99%)
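Below is a minimal sketch of the truncate-then-perturb step this quote describes, assuming a clip radius has already been chosen (e.g., from a privately estimated top eigenvalue): clip each gradient, average, and add Gaussian noise calibrated to the clipped average's sensitivity. It uses the standard Gaussian mechanism calibration (valid for ε ≤ 1) and is not the exact DP-PCA procedure; the function name is hypothetical.

```python
import numpy as np

def noisy_clipped_mean(grads, clip_radius, epsilon, delta):
    # Truncating every gradient to norm <= clip_radius bounds the
    # sensitivity of the average at 2 * clip_radius / n (swap model).
    n = len(grads)
    clipped = [g * min(1.0, clip_radius / max(np.linalg.norm(g), 1e-12))
               for g in grads]
    mean = np.mean(clipped, axis=0)
    sensitivity = 2.0 * clip_radius / n
    # Standard Gaussian mechanism calibration for (epsilon, delta)-DP.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return mean + np.random.normal(scale=sigma, size=mean.shape)
```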
“…4 in Appendix E.2, we refer to [55, Lemma D.1]. Unlike the private preconditioning of [46] that estimates all eigenvalues and requires n = O(d^{3/2} log(1/δ)/ε) samples, we only require the top eigenvalue and hence the next theorem shows that we only need n = O(d log(1/δ)/ε). Theorem 6.1.…”
Section: Differentially Private Principal Component Analysis (DP-PCA)
Citation type: mentioning (confidence: 99%)