Proceedings of the Web Conference 2021
DOI: 10.1145/3442381.3449965
Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy

Abstract: When receiving machine learning services from the cloud, the provider does not need to receive all features; in fact, only a subset of the features is necessary for the target prediction task. Discerning this subset is the key problem of this work. We formulate this problem as a gradient-based perturbation maximization method that discovers this subset in the input feature space with respect to the functionality of the prediction model used by the provider. After identifying the subset, our framework, Cloak, …
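As a rough illustration of the gradient-based perturbation maximization the abstract describes, the sketch below learns a per-feature Gaussian noise scale against a frozen cloud model: features that tolerate large noise without hurting the prediction loss are deemed non-essential. All names here (train_noise_scales, cloud_model, the trade-off weight lam) are illustrative assumptions, not taken from Cloak's official code.

import torch
import torch.nn as nn
import torch.nn.functional as F

def train_noise_scales(cloud_model, loader, n_features, lam=0.1, lr=1e-2, epochs=5):
    """Learn per-feature noise scales; a large sigma marks a non-essential feature."""
    rho = torch.zeros(n_features, requires_grad=True)   # sigma = softplus(rho) stays positive
    opt = torch.optim.Adam([rho], lr=lr)
    ce = nn.CrossEntropyLoss()
    for p in cloud_model.parameters():                  # the provider's model is frozen
        p.requires_grad_(False)
    for _ in range(epochs):
        for x, y in loader:                             # x: (batch, n_features)
            sigma = F.softplus(rho)
            noisy = x + sigma * torch.randn_like(x)     # perturb every feature
            # keep the prediction loss low while pushing the total noise up
            loss = ce(cloud_model(noisy), y) - lam * sigma.mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return F.softplus(rho).detach()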

Cited by 25 publications (6 citation statements) | References 56 publications

Citation statements:
“…Furthermore, trained machine learning models are not easily transported between institutions due to privacy concerns when they are trained on sensitive medical data [53]. Rule-based models, on the other hand, do not suffer from this limitation, and can be shared and reused easily and safely.…”
Section: Discussion
confidence: 99%
“…The privacy budget ϵ is set to 5. (4) Cloak [25]: We run its official code. Note that when training large datasets, the privacy-accuracy parameter is adjusted according to our experiment setting, which is set to 100.…”
Section: Comparisons With SOTA Methods
confidence: 99%
“…For instance, [17] used the Mixup [37] method to perturb data, though it has since been successfully attacked [5]. [25] presented a Gaussian noise disturbance method to suppress unimportant pixels before sending them to the cloud. Recently, some privacy-preserving methods have also appeared in the field of face recognition.…”
Section: Privacy Preserving
confidence: 99%
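The step this citing paper describes (suppressing unimportant pixels with Gaussian noise before transmission) would correspond to a client-side filter like the one sketched below. It reuses the hypothetical learned sigma from the earlier sketch; the function name and threshold are assumptions for illustration only.

import torch

def cloak_before_upload(x, sigma, threshold=1.0):
    # Features whose learned noise scale exceeds the threshold are treated as
    # non-essential and drowned in Gaussian noise before leaving the client.
    mask = (sigma > threshold).float()
    return x + mask * sigma * torch.randn_like(x)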
“…The authors in Reference [28] proposed a deep learning model that is capable of checking the effectiveness of privacy attacks. The authors in Reference [29] proposed an updated comparison of deep learning-based privacy preserving methods. Most recently, in Reference [30], a model was proposed to de-identify Italian medical records over a COVID-19 data set.…”
Section: Related Work
confidence: 99%