2018 IEEE 31st Computer Security Foundations Symposium (CSF)
DOI: 10.1109/csf.2018.00027
Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting

Abstract: Machine learning algorithms, when applied to sensitive data, pose a distinct threat to privacy. A growing body of prior work demonstrates that models produced by these algorithms may leak specific private information in the training data to an attacker, either through the models' structure or their observable behavior. However, the underlying cause of this privacy risk is not well understood beyond a handful of anecdotal accounts that suggest overfitting and influence might play a role. This paper examines the …

Cited by 633 publications (893 citation statements) | References 34 publications
“…We demonstrate that all methods indeed increase the model's membership inference risk. Defining the membership inference advantage as twice the increase in inference accuracy over random guessing [59], we show that robust machine learning models can incur 4.5×, 2.1×, and 3.5× the membership inference advantage of naturally undefended models on the Yale Face, Fashion-MNIST, and CIFAR10 datasets, respectively. (3) We further explore the factors that influence the membership inference performance of the adversarially robust model, including its robustness generalization, the adversarial perturbation constraint, and the model capacity.…”
Section: Introduction (mentioning)
confidence: 99%
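
The advantage metric quoted above, from [59], is simply twice the attacker's accuracy gain over a coin flip, which on a balanced member/non-member set equals the true-positive rate minus the false-positive rate. A minimal sketch in Python (the function name and toy data are illustrative, not taken from either paper):

import numpy as np

def membership_advantage(predicted_member: np.ndarray,
                         is_member: np.ndarray) -> float:
    # Advantage = 2 * (accuracy - 0.5); on a balanced set of members
    # and non-members this equals TPR - FPR.
    accuracy = np.mean(predicted_member == is_member)
    return 2.0 * (accuracy - 0.5)

# An attacker that guesses correctly 75% of the time has
# advantage 2 * (0.75 - 0.5) = 0.5.
preds = np.array([1, 1, 0, 0, 1, 0, 1, 0])
truth = np.array([1, 1, 1, 0, 1, 0, 0, 0])
print(membership_advantage(preds, truth))  # 0.5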
“…For example, participation in a hospital's health analytics training set means that an individual was once a patient in that hospital. It has been shown that the success of membership inference attacks is highly related to the target model's overfitting and its sensitivity to the training data [38,42,59]. Adversarially robust models aim to enhance the robustness of target models by ensuring that model predictions are unchanged within a small region (such as an l∞ ball) around each training example.…”
Section: Introduction (mentioning)
confidence: 99%
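
The overfitting connection cited here [38,42,59] is concrete: an overfit model assigns systematically lower loss to its training examples than to unseen ones, so a single loss threshold already yields a membership test. A minimal sketch of such a threshold attack, with purely illustrative numbers:

import numpy as np

def loss_threshold_attack(losses: np.ndarray, threshold: float) -> np.ndarray:
    # Predict "member" (1) whenever the per-example loss falls below
    # the threshold; the train/test loss gap from overfitting is what
    # makes this simple rule effective.
    return (losses < threshold).astype(int)

# Hypothetical losses from an overfit model: training losses sit
# well below test losses, so one threshold separates them cleanly.
train_losses = np.array([0.02, 0.05, 0.01, 0.08])  # members
test_losses = np.array([0.90, 1.40, 0.70, 2.10])   # non-members
losses = np.concatenate([train_losses, test_losses])
truth = np.array([1] * 4 + [0] * 4)
preds = loss_threshold_attack(losses, threshold=0.5)
print(np.mean(preds == truth))  # 1.0 on this toy separation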
“…The MPLens system is additionally customizable with respect to the attack method, including shadow-model-based attack techniques [17] and threshold-based attack techniques [16]. Furthermore, when using a threshold-based attack, our MPLens system can either accept pre-determined threshold values representing attacker knowledge of the target model's error or determine good threshold values through shadow model training.…”
Section: B. Attacker Knowledge (mentioning)
confidence: 99%
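
The shadow-model calibration this statement mentions can be sketched as follows: train a shadow model on data the attacker controls, record its losses on members and non-members, and keep the threshold that best separates the two before reusing it against the target. The brute-force search below is an assumed, simplified stand-in for however MPLens actually selects thresholds:

import numpy as np

def calibrate_threshold(shadow_in_losses: np.ndarray,
                        shadow_out_losses: np.ndarray) -> float:
    # Try each observed loss as a candidate threshold and keep the
    # one that best separates shadow members from non-members.
    candidates = np.concatenate([shadow_in_losses, shadow_out_losses])
    labels = np.array([1] * len(shadow_in_losses) +
                      [0] * len(shadow_out_losses))
    best_t, best_acc = 0.0, 0.0
    for t in candidates:
        acc = np.mean((candidates < t).astype(int) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Hypothetical shadow losses: calibrate once, apply to the target.
t = calibrate_threshold(np.array([0.03, 0.06, 0.02]),
                        np.array([0.80, 1.10, 0.90]))
print(t)  # 0.8 separates the two toy distributions perfectly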
“…Attack type | Attacker's observations | Inferred target
Side-channel attacks [22,23] | Power consumption, processing time, access pattern | Cryptographic keys
Membership inference attacks [24-26] | Confidence scores, gradients | Member/non-member status
Location inference attacks [27,28] | Sensor data on smartphone | Location
Feature inference attacks [29,30] | Partial features, model prediction | Missing features
CAPTCHA breaking attacks [31-33] | CAPTCHA | Text, audio, etc.…”
Section: Introduction (mentioning)
confidence: 99%
“…These attacks are also called attribute inference attacks [30]. To distinguish them from attribute inference attacks in online social networks, we call them feature inference attacks.…”
(mentioning)
confidence: 99%