2021
DOI: 10.1093/imaiai/iaab005

Compressive learning with privacy guarantees

Abstract: This work addresses the problem of learning from large collections of data with privacy guarantees. The compressive learning framework proposes to deal with the large scale of datasets by compressing them into a single vector of generalized random moments, called a sketch vector, from which the learning task is then performed. We provide sharp bounds on the so-called sensitivity of this sketching mechanism. This allows us to leverage standard techniques to ensure differential privacy—a well-established formalism…
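To make the sketching step concrete, here is a minimal NumPy illustration of such a random-moment sketch, instantiated with random Fourier features (the feature map also discussed in the citation statements below). The function name `compute_sketch` and all sizes are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def compute_sketch(X, Omega):
    # Random-moment sketch: z = (1/n) * sum_i exp(i * Omega^T x_i),
    # one complex generalized moment per random frequency in Omega.
    return np.exp(1j * X @ Omega).mean(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))     # n = 10,000 samples in dimension d = 5
Omega = rng.normal(size=(5, 200))    # m = 200 random frequencies, drawn once
z = compute_sketch(X, Omega)         # the only statistic that needs to be released
```

The whole dataset is summarized by the m-dimensional vector z, whose sensitivity to any single sample the paper bounds in order to calibrate privacy-preserving noise.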

Cited by 6 publications (3 citation statements). References 41 publications.
“…By simply sharing the sketch vectors and the random projection matrices, different users can simulate new data by training GMMNs. Furthermore, as shown in [21], some computations can already be performed directly on the sketches, without the need to simulate actual sequences. However, sharing sketches and/or trained networks does not provide any strong guarantees on the privacy of the sequences used for training.…”
Section: Proposed Methods (mentioning)
Confidence: 99%
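As a hedged illustration of computing directly on sketches, the snippet below estimates a squared kernel MMD between two datasets from their sketch vectors alone, assuming both were built with the same random Fourier feature matrix; the helper names are ours, and this is only one example of such a computation, not necessarily the one meant in [21].

```python
import numpy as np

def sketch(X, Omega):
    # Same random Fourier moment sketch as before.
    return np.exp(1j * X @ Omega).mean(axis=0)

def mmd2_from_sketches(z1, z2):
    # For a shift-invariant kernel, the squared distance between sketches
    # approximates the squared MMD between the two empirical distributions.
    return np.sum(np.abs(z1 - z2) ** 2) / z1.shape[0]

rng = np.random.default_rng(1)
Omega = rng.normal(size=(5, 200))          # projection matrix shared by both users
A = rng.normal(size=(5_000, 5))
B = rng.normal(loc=0.5, size=(5_000, 5))   # shifted distribution
print(mmd2_from_sketches(sketch(A, Omega), sketch(B, Omega)))  # clearly nonzero
```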
“…The variance needed to achieve a given privacy level can be determined by analyzing the so-called sensitivity of the noiseless sketch, i.e., the largest possible change that can result from removing one sample. When using the random Fourier feature map (3), which generates a complex-valued z(X), it has been established [57,58] that it suffices for the real and imaginary components of v to be i.i.d. Laplacian with standard deviation σ_v ∝ m/n.…”
Section: Sketching With Differential Privacy Guarantees (mentioning)
Confidence: 99%
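The noise calibration described in this statement can be mirrored in a few lines. Only the scaling σ_v ∝ m/n is taken from the statement (attributed there to [57,58]); the function name, the neighboring-dataset convention, and the exact constant below are illustrative assumptions.

```python
import numpy as np

def privatize_sketch(z, n, epsilon, rng):
    # Each of the 2m real coordinates of the averaged sketch moves by O(1/n)
    # when one sample is removed, so the L1 sensitivity scales like m/n and
    # Laplace noise with scale of order m / (n * epsilon) yields epsilon-DP.
    m = z.shape[0]
    scale = 2.0 * np.sqrt(2.0) * m / (n * epsilon)   # illustrative constant
    noise = rng.laplace(scale=scale, size=m) + 1j * rng.laplace(scale=scale, size=m)
    return z + noise
```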
“…Another approach is to randomly mask each feature vector Φ(x_i) prior to averaging, i.e., to set a random subset of its components to zero. It has been established [58] that such random masking neither increases nor decreases the differential privacy level, but it reduces the need to compute all entries of each feature vector and thus reduces sketching complexity.…”
Section: Perspectives and Open Challenges (mentioning)
Confidence: 99%
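A hypothetical sketch of the masking idea follows, reusing the random Fourier feature map assumed above. The per-entry renormalization by observed counts is one possible unbiasing convention, not necessarily the one analyzed in [58], and a real implementation would compute only the kept entries rather than masking a dense product.

```python
import numpy as np

def masked_sketch(X, Omega, keep_prob, rng):
    # Keep a random subset of the m sketch entries for each sample and
    # average each entry over the samples where it was kept.
    n, m = X.shape[0], Omega.shape[1]
    mask = rng.random((n, m)) < keep_prob      # which entries to compute
    feats = np.exp(1j * X @ Omega) * mask      # dense product for clarity only
    counts = np.maximum(mask.sum(axis=0), 1)   # avoid division by zero
    return feats.sum(axis=0) / counts
```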