Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy 2021
DOI: 10.1145/3422337.3447836
Membership Inference Attacks and Defenses in Classification Models

Cited by 54 publications (41 citation statements)
References 18 publications
“…Exploratory attacks are used to extract information from models, and various defenses have been proposed. The dominant attack most related to our work is the membership inference attack, and many defenses [29,32,42] have been proposed. However, most works assume access to the victim's model.…”
Section: Related Work (mentioning)
confidence: 99%
“…For example, MemGuard [29] is a state-of-the-art defense that adds noise to the model's output to drop the attack model's performance. Other techniques include adding a regularizer to the model's loss function [32] and applying dropout or model stacking techniques [42]. However, such model modifications are not possible in our setting where we assume no access to the model.…”
Section: Related Work (mentioning)
confidence: 99%
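The defenses named in this statement (output perturbation as in MemGuard, a loss regularizer, dropout or model stacking) all modify the victim model or its outputs. The sketch below is only a rough illustration of the output-perturbation idea: it adds label-preserving random noise to a softmax vector. It is not MemGuard's actual algorithm, which instead solves a constrained optimization to craft the perturbation; all names and parameters here are illustrative.

```python
import numpy as np

def perturb_confidences(probs, noise_scale=0.1, rng=None):
    """Add random noise to a softmax output while keeping the predicted label.

    Illustrative only: MemGuard itself optimizes an adversarial perturbation
    against the attack model rather than sampling Gaussian noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    label = int(np.argmax(probs))
    noisy = probs + rng.normal(scale=noise_scale, size=probs.shape)
    noisy = np.clip(noisy, 1e-8, None)
    noisy = noisy / noisy.sum()          # renormalize to a probability vector
    # Reject perturbations that would change the served prediction.
    return noisy if int(np.argmax(noisy)) == label else probs

# Example: an overconfident (member-like) prediction becomes less revealing.
print(perturb_confidences(np.array([0.97, 0.02, 0.01])))
```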
“…Li et al [39] proposed a defense based on a new regularizer (using the maximum mean discrepancy [4]) and mixup-training [85]. This method requires labeled reference data to compute the regularization term.…”
Section: Related Work (mentioning)
confidence: 99%
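For concreteness, here is a minimal sketch of how an MMD regularizer and mixup training can be combined in a single training step. It is an assumption-laden illustration rather than the procedure of Li et al.: the kernel choice, the point at which the MMD is computed, and the placeholder names (`model`, `ref_x`, `lam_reg`) are invented for this example. The one property it does reflect is the statement's point that a labeled reference batch is needed to compute the regularization term.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between two samples, RBF kernel."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def mixup(x, y, alpha=1.0):
    """Standard mixup: blend a batch with a shuffled copy of itself."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], y, y[idx], lam

def train_step(model, optimizer, x, y, ref_x, lam_reg=1.0):
    """One step: mixup cross-entropy loss plus an MMD penalty pulling the
    confidence distribution on training data toward that on reference data."""
    ce = torch.nn.CrossEntropyLoss()
    xm, ya, yb, lam = mixup(x, y)
    logits = model(xm)
    task_loss = lam * ce(logits, ya) + (1 - lam) * ce(logits, yb)
    reg = rbf_mmd2(torch.softmax(model(x), dim=1),
                   torch.softmax(model(ref_x), dim=1))
    loss = task_loss + lam_reg * reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```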
“…The privacy risks of DNNs have already been pointed out: a DNN is prone to memorizing sensitive information from its training dataset [6][7][8][9]. Taking the membership inference attack (MIA) as an example, an adversary can infer whether a given data sample was used to train a DNN, seriously threatening individual privacy.…”
Section: Introduction (mentioning)
confidence: 99%
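As a concrete illustration of what such an attack looks like in its simplest form, the sketch below guesses membership from a single confidence threshold. This is an assumed toy baseline, not the attack of any particular cited paper; practical attacks typically train shadow models or attack classifiers over full confidence vectors.

```python
import numpy as np

def threshold_mia(confidences, tau=0.9):
    """Guess 'member' when the model's top-class confidence exceeds tau.

    Toy baseline only: it exploits the observation that overfitted models
    tend to be more confident on training samples than on unseen samples.
    """
    return np.asarray(confidences) >= tau

# Hypothetical confidence values, for illustration only.
member_conf = [0.99, 0.97, 0.95]      # samples that were in the training set
nonmember_conf = [0.70, 0.88, 0.60]   # samples that were not
print(threshold_mia(member_conf))     # -> [ True  True  True]
print(threshold_mia(nonmember_conf))  # -> [False False False]
```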
“…In both the prediction confidence and sensitivity measurements, neural network pruning makes the distance between the two vertical lines larger in the pruned model than in the original model, which indicates a larger confidence gap and sensitivity gap between members and non-members due to pruning. Membership inference attacks exploit the different behaviors of a model on members (i.e., training samples) and non-members (i.e., test samples), such as their different prediction confidences [9,10]. Since most neural network pruning approaches rely on reusing the training dataset to fine-tune the parameters after pruning the insignificant ones, the additional training of the pruned neural network inevitably increases its memorization of the training samples.…”
Section: Introduction (mentioning)
confidence: 99%
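The gap this statement refers to can be quantified directly. Below is a small sketch, with made-up confidence values rather than measurements from the cited work, that computes the mean confidence gap between members and non-members; on the statement's argument, this gap would come out larger for a pruned and fine-tuned model than for the original one.

```python
import numpy as np

def confidence_gap(member_conf, nonmember_conf):
    """Mean top-class confidence on members minus that on non-members.

    A wider gap means members and non-members are easier to separate,
    i.e. a higher membership-inference risk.
    """
    return float(np.mean(member_conf) - np.mean(nonmember_conf))

# Hypothetical numbers purely for illustration.
print(confidence_gap([0.93, 0.95, 0.96], [0.85, 0.80, 0.88]))  # original model
print(confidence_gap([0.98, 0.99, 0.99], [0.82, 0.79, 0.86]))  # pruned + fine-tuned model
```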