2016
DOI: 10.48550/arxiv.1610.05820
Preprint

Membership Inference Attacks against Machine Learning Models

Cited by 23 publications (55 citation statements)
References: 0 publications
“…As such, it is hard to provide guarantees that participation in a dataset does not harm the privacy of an individual. Potential risks are adversaries performing membership test (to know whether an individual is in a dataset or not) [36], recovering of partially known inputs (use the model to complete an input vector with the most likely missing bits), and extraction of the training data using the model's predictions [35].…”
Section: Adversarial Goals (mentioning)
confidence: 99%
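The membership test mentioned in the statement above can be illustrated with a minimal confidence-threshold sketch: overfit models tend to return higher confidence on records they were trained on, so a high maximum class probability is treated as evidence of membership. The `predict_proba` interface follows the scikit-learn convention, and the threshold value is an assumption for illustration only, not a parameter prescribed by the cited papers.

```python
import numpy as np

def membership_test(model, x, threshold=0.9):
    """Guess whether record x was in the model's training set.

    Hypothetical sketch of the simplest membership test: the attacker
    queries the model and flags the record as a training member when the
    top predicted probability exceeds an assumed threshold.
    """
    probs = model.predict_proba(np.asarray(x).reshape(1, -1))[0]
    return bool(np.max(probs) >= threshold)
```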
“…We found that there is a trade-off between the quality of the generated samples and the privacy guarantees of the generative model. When using the discriminator for attacks, the success rates are similar to those found for classifiers [16], [18], [19]. Yet, with proper training setup, the model can be robust against membership inference attacks, which provides a strong guarantee against other attacks, like attribute inference.…”
Section: Summary and Concluding Remarks (mentioning)
confidence: 62%
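The discriminator-based attack on generative models referred to in this statement can be sketched as follows: an overfit discriminator assigns higher realness scores to records it saw during training, so ranking candidate records by that score exposes likely members. The `discriminator` callable and its interface are assumptions for illustration, not taken from the cited works.

```python
import numpy as np

def discriminator_membership_attack(discriminator, candidates, k):
    """Flag the k candidate records with the highest discriminator scores
    as suspected training-set members.

    `discriminator` is assumed to map a single record to a scalar score;
    the attack simply ranks candidates by that score.
    """
    scores = np.asarray([float(discriminator(c)) for c in candidates])
    top_k = np.argsort(scores)[::-1][:k]
    guesses = np.zeros(len(candidates), dtype=bool)
    guesses[top_k] = True
    return guesses
```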
“…These attacks can identify members of the training set of an ML algorithm and use side information provided by the algorithm to determine arXiv:2012.04475v1 [eess.SP] 6 Dec 2020 properties of the identified client. Thorough analysis of these attacks is available in [16], [18], [19]. However, little research has been done on the privacy leakage of generative models.…”
Section: A. Related Work (mentioning)
confidence: 99%