Proceedings 2021 Network and Distributed System Security Symposium 2021
DOI: 10.14722/ndss.2021.24293
Practical Blind Membership Inference Attack via Differential Comparisons

Abstract: Membership inference (MI) attacks affect user privacy by inferring whether given data samples have been used to train a target learning model, e.g., a deep neural network. There are two types of MI attacks in the literature, i.e., those with and without shadow models. The success of the former heavily depends on the quality of the shadow model, i.e., the transferability between the shadow and the target; the latter, given only blackbox probing access to the target model, cannot make an effective inference of u…

Cited by 48 publications (29 citation statements)
References 42 publications
“…In the evaluation, we consider the most widely used datasets, neural network architectures, and optimization approaches following recent research of MIAs [10,23,28,45].…”

Section: Evaluation Setup

confidence: 99%
“…Jayaraman et al. [20] analyze MIA under more realistic assumptions by relaxing the ratio of training-set size to testing-set size in the MIA setup to be any positive value instead of 1. Hui et al. [18] study MIA in a practical scenario, assuming no true labels of target samples are known and utilizing differential comparison for MIAs. Another threat model for MIAs is the white-box setting, i.e., the attacker has full access to the model [26,33] and can exploit model parameters to infer membership information.…”

Section: Related Work

confidence: 99%
“…Membership inference: In membership inference against machine learning classifiers [11,18,19,22,24,31,32,35,41,42,44,47,48,52], an inferrer aims to infer whether an input is in the training dataset of a classifier (called the target classifier). For instance, in the methods proposed by Shokri et al. [44], an inferrer first trains shadow classifiers to mimic the behaviors of the target classifier.…”

Section: Related Work

confidence: 99%
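The shadow-classifier approach described above can be sketched as follows. This is a hedged illustration only: the synthetic dataset, the single shadow model, the MLP/logistic-regression choices, and the sorted-posterior attack features are simplifying assumptions, not the setup of Shokri et al.

```python
# Sketch of a shadow-model membership inference attack (Shokri et al. style).
# All model/data choices here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)

# Disjoint pools: one to train/probe the target, one for the shadow model.
X_tgt, X_shd, y_tgt, y_shd = train_test_split(X, y, test_size=0.5, random_state=0)

def fit_and_label(X_pool, y_pool):
    """Train a model on half the pool; return attack features and in/out labels."""
    X_in, X_out, y_in, _ = train_test_split(X_pool, y_pool, test_size=0.5,
                                            random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                          random_state=0).fit(X_in, y_in)
    # Attack features: sorted posterior vectors (membership signal lives
    # in the model's confidence pattern on seen vs. unseen samples).
    feats = np.vstack([np.sort(model.predict_proba(X_in), axis=1),
                       np.sort(model.predict_proba(X_out), axis=1)])
    labels = np.r_[np.ones(len(X_in)), np.zeros(len(X_out))]
    return feats, labels

tgt_feats, tgt_labels = fit_and_label(X_tgt, y_tgt)  # ground truth, eval only
shd_feats, shd_labels = fit_and_label(X_shd, y_shd)  # trains the attack

# Attack classifier learned on the shadow model, evaluated on the target.
attack = LogisticRegression().fit(shd_feats, shd_labels)
acc = attack.score(tgt_feats, tgt_labels)
print(f"attack accuracy on target: {acc:.2f}")
```

As the citation statement notes, the attack's effectiveness hinges on how well the shadow classifier mimics the target; here both are trained identically, which is the best case for transferability.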
“…Salem et al [42] further improved these methods by relaxing the assumptions about the inferrer. Hui et al [24] proposed blind membership inference methods that do not require training shadow classifiers. Concurrent to our work, He et al [22] also studied membership inference against contrastive learning.…”
Section: Related Work
confidence: 99%
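The shadow-free direction attributed to Salem et al. can be illustrated with a simple confidence-threshold rule: flag a sample as a member when the target model's top posterior is high. This is a loose sketch under stated assumptions; the model, data, and the 0.9 threshold are illustrative, not taken from any of the cited papers.

```python
# Sketch of a shadow-free, threshold-style membership inference rule.
# Model, data, and threshold tau are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_in, X_out, y_in, _ = train_test_split(X, y, test_size=0.5, random_state=1)
target = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                       random_state=1).fit(X_in, y_in)

def infer_member(model, x, tau=0.9):
    """Blind rule: high prediction confidence suggests a training sample."""
    return model.predict_proba(x.reshape(1, -1)).max() >= tau

tpr = np.mean([infer_member(target, x) for x in X_in])   # members flagged
fpr = np.mean([infer_member(target, x) for x in X_out])  # non-members flagged
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}")
```

A TPR noticeably above the FPR would indicate membership leakage; blind attacks such as the differential-comparison method of Hui et al. aim to extract such signals without shadow models or true labels.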