2022
DOI: 10.1145/3523273

Membership Inference Attacks on Machine Learning: A Survey

Abstract: Machine learning (ML) models have been widely applied to various applications, including image classification, text generation, audio recognition, and graph data analysis. However, recent studies have shown that ML models are vulnerable to membership inference attacks (MIAs), which aim to infer whether a data record was used to train a target model or not. MIAs on ML models can directly lead to a privacy breach. For example, via identifying the fact that a clinical record that has been used to train a model as…
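To make the attack the abstract describes concrete, here is a minimal sketch of the simplest threshold-style MIA. It is not taken from the survey; the function name and the scikit-learn-style predict_proba() interface are assumptions, and a real attack would calibrate the threshold, e.g. on shadow models.

```python
import numpy as np

def confidence_threshold_mia(target_model, records, labels, threshold=0.9):
    """Flag a record as a training member when the target model assigns
    high confidence to the record's true label. `target_model` is assumed
    to expose a scikit-learn-style predict_proba(); the fixed threshold
    is illustrative and would normally be calibrated on shadow models."""
    probs = np.asarray(target_model.predict_proba(records))   # (n, n_classes)
    true_class_conf = probs[np.arange(len(labels)), np.asarray(labels)]
    return true_class_conf >= threshold                       # True = "member"
```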

Citation types: 0 supporting, 58 mentioning, 0 contrasting

Cited by 160 publications (84 citation statements) · References 78 publications
“…Nasr et al [39] instantiated the hypothetical adversary to analyse differentially private ML, especially DP-SGD. There are also some schemes [33][34][35][36][37] that take the generative model as the protection target. Hayes et al [33] first applied membership inference attacks to generative models and utilized differential privacy during the training stage to protect the generative model.…”
Section: Differential Privacy (mentioning, confidence: 99%)
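For context on the DP-SGD mechanism that [39] analyses, here is a minimal numpy sketch of one differentially private update step, per-example gradient clipping plus Gaussian noise in the style of Abadi et al.; the function name and hyperparameter defaults are illustrative, not taken from the cited works.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD update: clip each example's gradient to `clip_norm`,
    sum the clipped gradients, add Gaussian noise with standard deviation
    noise_multiplier * clip_norm, average, and take a gradient step."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=np.shape(params))
    return params - lr * noisy_sum / len(per_example_grads)
```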
“…We focus only on related works that are directly related to our contributions. See Hu et al [15] for a comprehensive survey of MIAs.…”
Section: Other Related Work (mentioning, confidence: 99%)
“…We now present related work pertinent to MIAShield. We refer the reader to [10] for a comprehensive survey.…”
Section: Related Work (mentioning, confidence: 99%)
“…This difference is exploited by an adversary as a strong signal to learn a member/nonmember decision function (attack model) or some threshold to flag members. Based on the information they leverage, MIAs can either be probability-dependent attacks (use confidence scores predicted for each class) or label-dependent attacks (use just the predicted label) [10].…”
Section: Introduction (mentioning, confidence: 99%)
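To illustrate the distinction drawn in that statement, here is a minimal sketch of the two signal families. The function names and the fixed threshold are illustrative assumptions; real attacks calibrate these values, e.g. with shadow models.

```python
import numpy as np

def probability_dependent_signal(probs, y_true, threshold=0.8):
    """Probability-dependent attack: flag as member when the confidence
    the model assigns to the true class exceeds a calibrated threshold."""
    return probs[np.arange(len(y_true)), np.asarray(y_true)] > threshold

def label_dependent_signal(y_pred, y_true):
    """Label-dependent attack: flag as member when the hard prediction is
    simply correct; uses only the predicted label, no confidence scores."""
    return np.asarray(y_pred) == np.asarray(y_true)
```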