2017
DOI: 10.1109/msp.2016.2616720
Compressive Privacy: From Information\/Estimation Theory to Machine Learning [Lecture Notes]

Cited by 43 publications (36 citation statements)
References 3 publications
“…While some entities might seek total hiding of their data, DR has another benefit for privacy. For datasets whose samples carry two labels, a utility label and a privacy label, Kung [26] proposes a DR method that lets the data owner project her data so as to maximize learning accuracy for the utility labels while decreasing it for the privacy labels. Although this method does not eliminate all privacy risks of the data, it enables controlling the misuse of the data when the privacy target is known.…”
Section: Perturbation Approaches
confidence: 99%
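The projection described in the excerpt above can be sketched as a generalized-eigenvalue problem: favor directions that separate the utility classes while suppressing directions that separate the privacy classes. This is a minimal illustrative sketch, not Kung's exact algorithm; the function name, the between-class-scatter objective, and the `eps` regularizer are all assumptions made here for concreteness.

```python
import numpy as np

def privacy_aware_projection(X, y_util, y_priv, k, eps=1e-4):
    """Illustrative sketch (not the paper's exact method): find a rank-k
    projection W with high utility-class separation and low privacy-class
    separation, via a generalized eigenproblem on scatter matrices."""
    def between_class_scatter(X, y):
        # Sum of n_c * (class mean - global mean) outer products.
        mu = X.mean(axis=0)
        S = np.zeros((X.shape[1], X.shape[1]))
        for c in np.unique(y):
            Xc = X[y == c]
            d = (Xc.mean(axis=0) - mu)[:, None]
            S += len(Xc) * (d @ d.T)
        return S

    S_u = between_class_scatter(X, y_util)  # utility signal to keep
    S_p = between_class_scatter(X, y_priv)  # privacy signal to suppress
    # Directions maximizing the utility-to-privacy scatter ratio;
    # eps*I regularizes the (rank-deficient) privacy scatter.
    M = np.linalg.solve(S_p + eps * np.eye(X.shape[1]), S_u)
    vals, vecs = np.linalg.eig(M)           # M is non-symmetric: take real parts
    order = np.argsort(-vals.real)
    W = vecs.real[:, order[:k]]
    return X @ W, W
```

On synthetic data where one feature encodes the utility label and another the privacy label, the top projection direction should load mostly on the utility feature.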
“…However, as the application of privacy is under consideration, extending the embedded feature space can come at the cost of extra information shared. Under the regime of Compressive Privacy [16], [17], this is not desirable. Hence, in the design of the multi-kernel step, the rank of the final multi-kernel should not exceed that of the original data.…”
Section: The Multi-kernel
confidence: 99%
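The rank constraint in the excerpt above can be illustrated numerically: a richer kernel (e.g. polynomial) raises the kernel matrix's rank beyond that of the original data, and truncating the combined kernel's SVD back to rank(X) restores the constraint. This is a hypothetical demonstration under assumed data, not code from the cited paper.

```python
import numpy as np

def numerical_rank(M, tol=1e-8):
    """Rank by counting singular values above a threshold."""
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > tol))

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 4))        # 50 samples, intrinsic rank 4
K_lin = X @ X.T                     # linear kernel: rank <= rank(X)
K_poly = (K_lin + 1.0) ** 2         # degree-2 polynomial kernel: larger feature space

mu = [0.5, 0.5]                     # example (assumed) kernel weights
K_multi = mu[0] * K_lin + mu[1] * K_poly

# Enforce the Compressive Privacy rank constraint: keep only the top
# rank(X) components of the combined multi-kernel.
U, s, Vt = np.linalg.svd(K_multi)
r = numerical_rank(X)
K_trunc = (U[:, :r] * s[:r]) @ Vt[:r]
```

The truncation is the best rank-r approximation of the multi-kernel in Frobenius norm, so no more directions are exposed than the original data already spanned.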
“…3) Signal-to-Noise Ratio for Kernel Weight Design: The design of the multi-kernel in Equation (15) involves the determination of the value of µ_l for each kernel. The signal-to-noise ratio (SNR) in the form of the trace-norm of the discriminant matrix, as motivated by the inter-class separability metric in Equation (31) of [16], is proposed as a new metric to decide the value of µ_l. Specifically, the SNR is defined as,…”
Section: The Multi-kernel
confidence: 99%
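An SNR-style trace criterion for weighting candidate kernels can be sketched as follows: score each centered kernel by its between-class ("signal") energy relative to its within-class ("noise") energy, then normalize the scores into weights µ_l. The exact discriminant-matrix form of Equation (31) in [16] is not reproduced here; this decomposition and the normalization are assumptions for illustration.

```python
import numpy as np

def kernel_snr(K, y):
    """Illustrative SNR score for a kernel matrix K under labels y:
    between-class energy of the centered kernel over the remaining
    (within-class) trace energy."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                          # center the kernel
    signal = 0.0
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        # 1_c^T Kc 1_c / |c|: energy captured by the class-mean direction
        signal += Kc[np.ix_(idx, idx)].sum() / len(idx)
    noise = np.trace(Kc) - signal           # leftover within-class energy
    return signal / max(noise, 1e-12)

def multikernel_weights(kernels, y):
    """Assign each candidate kernel a weight proportional to its SNR."""
    scores = np.array([kernel_snr(K, y) for K in kernels])
    return scores / scores.sum()
```

A kernel built from a label-correlated feature should receive a larger weight than one built from pure noise.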