2019
DOI: 10.1109/msec.2018.2888775

Privacy-Preserving Machine Learning: Threats and Solutions

Abstract: For privacy concerns to be addressed adequately in today's machine learning systems, the knowledge gap between the machine learning and privacy communities must be bridged. This article aims to provide an introduction to the intersection of both fields with special emphasis on the techniques used to protect the data.

Cited by 281 publications (155 citation statements)
References 24 publications
“…In PPDP, explicit identifiers and non-sensitive attributes are removed before the collected data is shared with third parties, quasi-identifiers (QIs) are generalized or suppressed, and sensitive attributes (SAs) are retained as-is for analytical purposes. In general, those who have collected and mined this data are reluctant to share it in any way, shape, or form, because attackers often possess strong background knowledge or have access to the large amounts of auxiliary information needed to breach the safeguards used to protect users' privacy [6].…”
Section: Introduction (mentioning)
confidence: 99%
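The PPDP procedure quoted above can be sketched in a few lines: explicit identifiers are dropped, quasi-identifiers are generalized, and sensitive attributes are retained. This is a minimal illustration, not code from the cited work; the field names and generalization rules (decade-wide age ranges, truncated ZIP codes) are illustrative assumptions.

```python
# Sketch of PPDP release preparation: drop explicit identifiers,
# generalize quasi-identifiers (QIs), retain sensitive attributes (SAs).
# Field names and generalization granularity are hypothetical.

def generalize_record(record):
    """Return a publishable record from a raw one."""
    decade = (record["age"] // 10) * 10
    return {
        # QIs, generalized: exact age -> decade range, ZIP -> 3-digit prefix
        "age_range": f"{decade}-{decade + 9}",
        "zip_prefix": record["zip"][:3] + "**",
        # SA, retained as-is for analytical purposes
        "diagnosis": record["diagnosis"],
        # explicit identifiers ("name", "ssn") are simply not copied over
    }

raw = {"name": "Alice", "ssn": "123-45-6789",
       "age": 34, "zip": "90210", "diagnosis": "flu"}
print(generalize_record(raw))
# {'age_range': '30-39', 'zip_prefix': '902**', 'diagnosis': 'flu'}
```

Suppression (omitting a QI value entirely) follows the same pattern: the field is dropped rather than coarsened when generalization alone cannot prevent re-identification.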
“…These data generators could be very useful for testing potential future DR policies before deploying such solutions in consumer households. Another topic of interest is extending the proposed model to account for the privacy of smart meter measurements, where recent research in privacy-preserving machine learning is a promising approach [49].…”
Section: Discussion (mentioning)
confidence: 99%
“…Protecting sensitive personal data is a major requirement when operating ML systems. In terms of active attacks, the privacy of data subjects can be at risk in multi-party environments [13], [14]. For example, a publicly available trained model can leak sensitive information about its training data set to an attacker who is able to run multiple model executions on appropriately prepared adversarial inputs.…”
Section: Privacy-sensitivity (mentioning)
confidence: 99%
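The leakage described above can be made concrete with a toy membership-inference sketch: a model that has memorized its training data answers with conspicuously high confidence on training points, and an attacker who can only query the model exploits that gap. The overfit 1-nearest-neighbor stand-in and the confidence threshold below are illustrative assumptions, not the attack from the cited works.

```python
# Toy membership-inference sketch: an overfit model leaks which points
# it was trained on through its confidence scores alone.

def make_model(train_set):
    """An overfit 'model': confidence decays with distance to training data."""
    def confidence(x):
        nearest = min(abs(x - t) for t in train_set)
        return 1.0 / (1.0 + nearest)  # exactly 1.0 on a training point
    return confidence

def membership_attack(confidence, x, threshold=0.99):
    """Black-box attacker: guess x was a training point if confidence is too high."""
    return confidence(x) >= threshold

train = [1.0, 4.0, 7.0]
model = make_model(train)
print(membership_attack(model, 4.0))  # True  -- training point is exposed
print(membership_attack(model, 5.5))  # False -- unseen point
```

The point of the sketch is the attack surface, not the model: the attacker never sees the training set, only the confidence of repeated model executions on inputs of their choosing.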