2022
DOI: 10.48550/arxiv.2203.02077
Preprint

User-Level Membership Inference Attack against Metric Embedding Learning

Abstract: Membership inference (MI) determines whether a sample was part of a victim model's training set. Recent developments in MI attacks focus on record-level membership inference, which limits their application in many real-world scenarios. For example, in the person re-identification task, the attacker (or investigator) is interested in determining whether a user's images have been used during training or not. However, the exact training images might not be accessible to the attacker. In this paper, we develop a user-level MI a…

Cited by 2 publications (4 citation statements, published 2023–2024) | References 8 publications

Citation statements (ordered by relevance):
“…Membership inference attacks (MI) can reveal if specific data samples were present in a model's training dataset [63]. Using MI to audit training data has been considered in images, speech, machine translation, and metric-embedding domains [27,39,47,67]. Unfortunately, MI remains unreliable for many (non-outlier) data samples, and generally requires significant data and compute to train multiple shadow models to approximate the behavior of F [63].…”
Section: User Data-level Modifications | Citation type: mentioning (confidence: 99%)
“…However, our paper makes a thorough inquiry into the distribution gap of similarities. Li, Rezaei, and Liu (2022) propose a user-level MI attack in metric embedding learning. This approach is based on the assumption that data from the same category forms a more compact cluster in the training set than in the test set, and uses the average and pair-wise intra-class distance as features to conduct user-level membership inference.…”
Section: Related Work | Citation type: mentioning (confidence: 99%)
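
A minimal sketch (in Python, assuming pre-computed embeddings) of the intra-class distance features described in the statement above: the average and pair-wise distances within one user's embeddings, which are assumed to be more compact for members than for non-members. The function names and the toy comparison are illustrative assumptions, not the exact procedure of Li, Rezaei, and Liu (2022).

import numpy as np


def pairwise_intra_class_distances(embeddings: np.ndarray) -> np.ndarray:
    # All pairwise Euclidean distances among one user's embeddings (upper triangle only).
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(embeddings), k=1)
    return dists[iu]


def intra_class_features(embeddings: np.ndarray) -> np.ndarray:
    # Summary statistics of the intra-class distance distribution, used as MI features.
    d = pairwise_intra_class_distances(embeddings)
    return np.array([d.mean(), d.std(), np.median(d), d.min(), d.max()])


# Toy usage: a member's images are assumed to cluster more tightly in embedding space
# than a non-member's, so the mean intra-class distance already separates the two here.
rng = np.random.default_rng(0)
member_embeddings = rng.normal(0.0, 0.3, size=(8, 128))      # compact cluster
non_member_embeddings = rng.normal(0.0, 1.0, size=(8, 128))  # looser cluster

for label, emb in [("member", member_embeddings), ("non-member", non_member_embeddings)]:
    print(label, "mean intra-class distance:", round(float(intra_class_features(emb)[0]), 3))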
“…low number of samples causes a low attack success rate). While Li, Rezaei, and Liu (2022) focus only on the average and pair-wise distance of intra-class samples, our method looks at the more general similarity distribution over all sample pairs (both intra- and inter-class similarity). Furthermore, our method does not require multiple samples for each identity.…”
Section: Related Work | Citation type: mentioning (confidence: 99%)
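
By contrast, the second statement above looks at the similarity distribution over all sample pairs rather than only intra-class distances. The following sketch assumes, purely for illustration, cosine similarities against a reference pool and a two-sample Kolmogorov–Smirnov comparison; the cited paper's actual statistic and pipeline may differ.

import numpy as np
from scipy.stats import ks_2samp


def cosine_similarities(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Cosine similarity between every embedding in `a` and every embedding in `b`, flattened.
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    return (a_n @ b_n.T).ravel()


# Toy usage: build the similarity distribution between the query sample and a reference
# pool (so inter-class pairs are included), then compare it against the same distribution
# computed for a known non-member; a significant distribution shift hints at membership.
# Note that a single query sample suffices, matching the "no multiple samples" point above.
rng = np.random.default_rng(1)
reference_pool = rng.normal(0.0, 1.0, size=(100, 64))   # embeddings of other identities
suspect_query = rng.normal(0.0, 1.0, size=(1, 64))      # single sample of the suspected member
known_non_member = rng.normal(0.0, 1.0, size=(1, 64))   # calibration sample from a known non-member

stat, p_value = ks_2samp(cosine_similarities(suspect_query, reference_pool),
                         cosine_similarities(known_non_member, reference_pool))
print("KS statistic:", round(float(stat), 3), "p-value:", round(float(p_value), 4))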