2022
DOI: 10.1002/int.23000

Label‐only membership inference attacks on machine unlearning without dependence of posteriors

Abstract: Machine unlearning is the process through which a deployed machine learning model is made to forget some of its training data items. It normally generates two machine learning models, the original model and the unlearned model, representing the training results before and after the data items are deleted. However, recent studies find that machine unlearning is vulnerable to membership inference attacks, as the directivity of training and nontraining data (i.e., data items in the training set have high posterior…
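The abstract refers to the posterior-based membership inference threat that the paper's label-only attack is designed to work around: an adversary queries both the original and the unlearned model and uses the drop in posterior confidence on a deleted item as the membership signal. Below is a minimal sketch of that baseline signal, not the paper's method, assuming exact unlearning by retraining from scratch and a hypothetical logistic-regression target; the toy data, model choice, and threshold are all illustrative.

```python
# Illustrative posterior-difference membership inference on unlearning.
# Assumption: exact unlearning = retraining without the deleted item.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs stand in for a real training set.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)),
               rng.normal(1.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

target_idx = 0  # the data item whose deletion is requested

# "Original" model trained on everything; "unlearned" model retrained
# on the same data minus the target item.
original = LogisticRegression().fit(X, y)
mask = np.ones(len(X), dtype=bool)
mask[target_idx] = False
unlearned = LogisticRegression().fit(X[mask], y[mask])

def posterior_gap(x, label):
    """Drop in posterior confidence on the true label between models."""
    p_orig = original.predict_proba(x.reshape(1, -1))[0, label]
    p_unl = unlearned.predict_proba(x.reshape(1, -1))[0, label]
    return p_orig - p_unl

# A large positive gap suggests the item was deleted (i.e., a member
# of the original training set); the threshold is arbitrary here.
gap = posterior_gap(X[target_idx], y[target_idx])
print(f"posterior confidence drop on the deleted item: {gap:.4f}")
print("inferred: member" if gap > 0.01 else "inferred: non-member")
```

The paper's contribution, per the title, is mounting this kind of inference using predicted labels only, without access to the posteriors that this baseline sketch relies on.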

Cited by 8 publications (3 citation statements) | References 41 publications
“…Federated Reinforcement Learning [31] has also recently emerged. Security issues [32][33][34][35][36][37][38], especially in vertical federated learning, have aroused wide concern [39,40]. Wei et al [41] survey the issues of security and privacy in VFL.…”
Section: Related Work
confidence: 99%
“…On the other hand, these data threaten users' privacy and increase the risk of data leakage [6][7][8][9]. Furthermore, privacy regulations, such as the European General Data Protection Regulation (GDPR) [10], allow users to request the deletion of their personal data from learning models as part of the "right to be forgotten".…”
Section: Introduction
confidence: 99%