2022
DOI: 10.48550/arxiv.2205.14055
Preprint

Benign Overparameterization in Membership Inference with Early Stopping

Abstract: Does a neural network's privacy have to be at odds with its accuracy? In this work, we study the effects the number of training epochs and parameters have on a neural network's vulnerability to membership inference (MI) attacks, which aim to extract potentially private information about the training data. We first demonstrate how the number of training epochs and parameters individually induce a privacy-utility trade-off: more of either improves generalization performance at the expense of lower privacy. However…
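
For context, the class of membership inference attacks the abstract refers to can be as simple as thresholding a model's per-example loss: training members tend to incur lower loss than unseen points, and the gap (and hence attack success) grows as the model overfits with more epochs. The snippet below is a minimal, hypothetical sketch of such a loss-threshold attack on synthetic losses; it is not the paper's specific attack, and all names and numbers are illustrative.

```python
import numpy as np

def loss_threshold_mi_attack(member_losses, non_member_losses, threshold=None):
    """Score membership by per-example loss: lower loss -> more likely a training member.

    member_losses, non_member_losses: 1-D arrays of per-example losses from the target model.
    Returns the attack's balanced accuracy at the chosen threshold.
    """
    if threshold is None:
        # Simple heuristic: use the mean of all observed losses as the decision boundary.
        threshold = np.mean(np.concatenate([member_losses, non_member_losses]))
    tpr = np.mean(member_losses < threshold)        # members correctly flagged
    tnr = np.mean(non_member_losses >= threshold)   # non-members correctly rejected
    return 0.5 * (tpr + tnr)

# Synthetic example: an overfit model yields much lower loss on members than on
# unseen data, so the attack's balanced accuracy rises well above the 0.5 chance level.
rng = np.random.default_rng(0)
members = rng.exponential(scale=0.1, size=1000)      # low loss on training data
non_members = rng.exponential(scale=1.0, size=1000)  # higher loss on held-out data
print(f"MI attack balanced accuracy: {loss_threshold_mi_attack(members, non_members):.2f}")
```

In this toy setup the attack succeeds precisely because the member and non-member loss distributions separate; interventions such as early stopping narrow that gap, which is the privacy-utility tension the abstract describes.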

Cited by: 0 publications
References: 30 publications
