2021
DOI: 10.1016/j.tcs.2021.03.001
Differentially private high dimensional sparse covariance matrix estimation

Cited by 4 publications (13 citation statements)
References 7 publications
“…Specifically, the framework leverages a deep learning-based data minimization model to constrain the data size and obfuscates the learned features from the original data by adaptively injecting noise to achieve LDP, hence constructing a novel privacy-preservation layer against sensitive-information inference on the edge server. [104] focused on the sparse covariance matrix estimation problem under differential privacy, proposed a novel DP-thresholding method that achieves an ℓ2-norm error bound, and further extended the bound to a general ℓw-norm one (1 ≤ w ≤ ∞). The method is significantly better than adding noise directly and extends easily to the LDP model.…”
Section: Private Federated Learning and Deep Learning in the Local Model (mentioning)
confidence: 99%
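The noise-then-threshold idea summarized above can be illustrated with a short sketch. This is only a reconstruction of the generic technique, not the exact algorithm of [104]: the Gaussian-mechanism calibration, the row-norm bound, and the threshold `tau` are assumptions made for the example.

```python
import numpy as np

def dp_threshold_covariance(X, epsilon, delta, tau):
    """Illustrative noise-then-threshold DP covariance estimate.

    X: (n, d) data matrix; rows are assumed mean-zero with l2 norm <= 1.
    epsilon, delta: (epsilon, delta)-differential-privacy parameters.
    tau: hard-threshold level that exploits sparsity of the true covariance.
    """
    n, d = X.shape
    S = X.T @ X / n  # empirical covariance of mean-zero data

    # Replacing one row changes S by at most 2/n in Frobenius norm when
    # rows have l2 norm <= 1, so calibrate the Gaussian mechanism to that.
    sensitivity = 2.0 / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

    # Add noise to the upper triangle and mirror it, keeping the output
    # symmetric; mirroring is post-processing and costs no extra privacy.
    noise = np.random.normal(0.0, sigma, size=(d, d))
    noise = np.triu(noise) + np.triu(noise, 1).T
    S_priv = S + noise

    # Thresholding zeroes out entries dominated by noise, which is what
    # improves the error bound over releasing the noisy matrix directly.
    S_priv[np.abs(S_priv) < tau] = 0.0
    return S_priv

# Example: n = 500 samples in d = 50 dimensions, rows roughly unit-norm.
X = np.random.randn(500, 50) / np.sqrt(50)
S_hat = dp_threshold_covariance(X, epsilon=1.0, delta=1e-5, tau=0.05)
```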
“…We used a very simple algorithm for the differential privacy technique. This section examines what happens when we use more advanced differential privacy techniques, such as [15], [36], [37], [38]. However, both the techniques used in this article and more advanced differential privacy techniques can strictly adhere to a given privacy parameter ε.…”
Section: Discussion (mentioning)
confidence: 99%
“…Moreover, the distribution assumption is required to be Gaussian. Recently, Wang and Xu [10] relaxed the distribution assumption from Gaussian to sub-Gaussian and imposed a sparse structure on the covariance matrix, which led to a low convergence rate, while using a differentially private algorithm (DPA) and a locally differentially private algorithm (LDPA) to preserve privacy.…”
Section: Related Work and Our Contributions (mentioning)
confidence: 99%
“…The DPA and LDPA aim to hide individuals' true information while preserving the basic properties of the whole dataset. A popular way to achieve this goal is to inject specially calibrated noise into the original model [9][10][11], as in the sketch below.…”
Section: Introduction (mentioning)
confidence: 99%
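To make the noise-injection idea concrete, here is a minimal sketch of the standard Laplace mechanism, which releases a query answer under ε-DP given an ℓ1-sensitivity bound. It illustrates the generic recipe rather than the specific constructions of [9][10][11]; the sensitivity value in the example is an assumption.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value under epsilon-DP via the Laplace mechanism.

    sensitivity: l1 sensitivity of the query, i.e. the maximum change in
    its output when one individual's record is altered.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon  # larger epsilon -> weaker privacy, less noise
    return true_value + rng.laplace(0.0, scale, size=np.shape(true_value))

# Example: privately release the mean of n = 1000 values in [0, 1];
# changing one record moves the mean by at most 1/n.
data = np.random.rand(1000)
private_mean = laplace_mechanism(data.mean(), sensitivity=1.0 / 1000, epsilon=0.5)
```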