2020
DOI: 10.48550/arxiv.2012.13891
Preprint

Federated Unlearning

Gaoyang Liu, Xiaoqiang Ma, Yang Yang, et al.

Abstract: Data removal from machine learning models has received increasing attention due to the demands of the "right to be forgotten" and of countering data poisoning attacks. In this paper, we frame the problem of federated unlearning, a post-processing operation on federated learning models that removes the influence of the specified training sample(s). We present FedEraser, the first federated unlearning methodology that can eliminate the influences of a federated client's data on the global model while significantly reducin…

Cited by 7 publications (12 citation statements)
References 36 publications (49 reference statements)
“…a) a leaves with assured privacy if ψ(w_m^t) − ψ(w^{t+1}) ≥ δ; b) a leaves with distrust if ψ(w_m^t) − ψ(w^{t+1}) < δ. System Assumption. Following existing unlearning works [31], [41], we assume a trusted server with an unlearning method in place. We also assume that the local data of the involved participants remains the same in each contributed round of FL.…”
Section: A Unlearning-Verification Mechanism (mentioning)
confidence: 99%
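The threshold rule quoted above can be expressed as a small sketch. Here ψ is taken to be some scalar verification metric computed on the global model (the statement does not fix which one), and the function and argument names below are illustrative assumptions rather than anything defined in the cited work.

```python
# Hypothetical sketch of the delta-threshold verification rule quoted above.
# psi_with_member -- psi(w_m^t): metric on the model still containing client m's data
# psi_after       -- psi(w^{t+1}): metric on the model after unlearning
# delta           -- minimum drop required for the departing participant to be satisfied

def verify_unlearning(psi_with_member: float, psi_after: float, delta: float) -> str:
    """Return the departing participant's verdict under the threshold rule."""
    if psi_with_member - psi_after >= delta:
        return "leaves with assured privacy"
    return "leaves with distrust"

# Example: a drop of 0.3 against a threshold of 0.2 is accepted.
print(verify_unlearning(0.7, 0.4, 0.2))  # -> leaves with assured privacy
```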
“…However, while this method reduces training time, it consumes a lot of storage space. Liu [12] proposed a data unlearning approach based on the federated learning framework: the update parameters of each round are saved during the normal training's model aggregation stage, and when the forgotten data are deleted and the model is retrained, the number of client training iterations is reduced. Model aggregation combines the current client's parameters with the updated parameters saved during previous training to construct the server model. This approach reduces training time, but also takes up more storage space because the parameters must be saved.…”
Section: E Distribution Before and After Unlearning (mentioning)
confidence: 99%
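A rough sketch of the stored-update idea described in this statement is given below. It is a minimal illustration under assumptions: the server keeps each retained client's per-round update, and the optional norm-preserving calibration step is one plausible reading of how the saved updates are combined with a short round of fresh client training; all names (unlearn_client, calibrate, and so on) are hypothetical, not the authors' API.

```python
import numpy as np

def unlearn_client(w0, stored_updates, forgotten, calibrate=None):
    """Rebuild the global model without the forgotten client's contributions.

    w0             -- initial global model parameters (np.ndarray)
    stored_updates -- stored_updates[t][client_id] = update saved at round t of training
    forgotten      -- id of the client whose influence should be removed
    calibrate      -- optional fn(w, client_id, t) returning a cheap, few-iteration
                      calibration update for a retained client (hypothetical hook)
    """
    w = w0.copy()
    for t, round_updates in enumerate(stored_updates):
        retained = {c: u for c, u in round_updates.items() if c != forgotten}
        if calibrate is not None:
            # Keep each stored update's magnitude but follow the freshly
            # calibrated direction (an assumed re-use-and-correct rule).
            retained = {c: np.linalg.norm(u) * _unit(calibrate(w, c, t))
                        for c, u in retained.items()}
        # Aggregate the retained clients' updates to advance the global model.
        w = w + np.mean(list(retained.values()), axis=0)
    return w

def _unit(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

Compared with retraining from scratch, replaying the saved updates recovers most of the training trajectory cheaply, which is the source of both the time savings and the extra storage cost noted in the statement.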
“…However, while this method reduces training time, it consumes a lot of storage space. Another data unlearning work [12] saves the update parameters of each round during the normal training's aggregation stage; when the data to be forgotten are deleted and the model is retrained, the number of client training iterations is reduced, and model aggregation combines the current client's parameters with the previously saved updates to reconstruct the server model. This approach reduces training time, but also takes up more storage space because of the saved parameters. Moreover, since the updated parameters must be saved, the training process of the target model is modified, and the saved parameters themselves carry the information to be forgotten, so complete unlearning cannot be theoretically guaranteed.…”
Section: Introduction (mentioning)
confidence: 99%
“…[14] studied adding noise to the model parameters to delete a specific class (or a subset of a specific class) in a classification task; the drawback of this method is likewise its high computational complexity. [21] studied deleting data in the federated learning scenario: during the training phase, the central parameter server saves the updated parameters for each round, and when deleting data the model is simply retrained on the remaining data, with the retraining phase accelerated by loading the intermediate parameters; the disadvantage of this method is that caching the parameters consumes a lot of storage, especially for complex models. [19] studied the problem of data deletion in linear and logistic regression models and proposed an approximate deletion method whose computational cost is linear in the data dimension.…”
Section: Related Work (mentioning)
confidence: 99%
“…The disadvantage of this strategy is that when the model is large, storing the model's parameters consumes a lot of storage space. Recently, [21] studied removing data from a trained model in the federated learning scenario; similar to [6], the retraining process can be accelerated by caching the intermediate parameters of the model. [8] studied how to erase data in random forests.…”
Section: Introduction (mentioning)
confidence: 99%