Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3512222
Federated Unlearning via Class-Discriminative Pruning

Cited by 34 publications (15 citation statements). References 18 publications.
“…For deep neural networks, Golatkar, Achille, and Soatto (2020) try to add Fisher noise to hide the information about the unlearned data. The work closest to ours is Wang et al. (2022), which is concurrent work. They try to scrub memories for each category in federated learning.…”
Section: Related Work (mentioning)
confidence: 99%
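The Fisher-noise idea referenced above can be illustrated with a minimal sketch: estimate a diagonal Fisher information from squared gradients on the retained data, then add Gaussian noise whose scale is inversely proportional to that estimate, so parameters unimportant to the retained data are perturbed most. This is an assumption-laden simplification of Golatkar et al.'s scrubbing procedure, not their actual algorithm; `fisher_noise_scrub` and its signature are hypothetical.

```python
import numpy as np

def fisher_noise_scrub(weights, grads_retain, sigma=0.1, eps=1e-8):
    """Hedged sketch of Fisher-noise scrubbing (in the spirit of
    Golatkar, Achille, and Soatto 2020): add Gaussian noise scaled
    inversely to a diagonal Fisher estimate, so parameters important
    to the retained data are perturbed least.

    weights:      1-D parameter vector
    grads_retain: (n_batches, n_params) gradients on retained data
    """
    # Diagonal Fisher approximation: mean squared gradient per parameter.
    fisher = np.mean(np.square(grads_retain), axis=0)
    # Noise std shrinks as Fisher information grows.
    noise = np.random.randn(*weights.shape) * sigma / np.sqrt(fisher + eps)
    return weights + noise
```

With `sigma=0` the weights pass through unchanged, which makes the scaling behavior easy to sanity-check in isolation.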
“…We often consider backdoor attacks a great threat in FL while ignoring their potential advantages. Indeed, a backdoor attack has demonstrated its usefulness in the unlearning scenario: unlearning is a technique in FL [28,128,129] that removes or revokes access to data, participants, or parts of the model, with the goal of improving the integrity and accuracy of the model. Backdoor triggers are utilized as an evaluation tool to assess the effectiveness of unlearning methods [130].…”
Section: Discussion (mentioning)
confidence: 99%
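The evaluation role of backdoor triggers mentioned above can be sketched simply: measure the fraction of trigger-stamped inputs that the model still classifies as the attacker's target label; successful unlearning should drive this rate toward chance. This is a generic illustration under assumed names (`backdoor_success_rate`, `model_predict`), not a method from the cited works.

```python
def backdoor_success_rate(model_predict, triggered_inputs, target_label):
    """Fraction of trigger-stamped inputs classified as the attacker's
    target label. A model that has truly unlearned the backdoored data
    should score near chance; a high rate means the trigger survives.
    """
    preds = [model_predict(x) for x in triggered_inputs]
    return sum(p == target_label for p in preds) / len(preds)
```

In practice this metric is computed before and after unlearning, and the drop in success rate is reported as evidence that the targeted information was removed.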
“…Simple models such as linear and logistic regression [4,5,6,7], random forests [8], and k-means clustering [9,10] have been explored in a provable unlearning setup. Furthermore, deep learning models such as convolutional neural networks [11,12,13,14,15,16] and vision transformers [17] have been explored in an approximate unlearning setup. All these existing methods are aimed at unlearning in classification problems.…”
Section: Motivation (mentioning)
confidence: 99%
“…This approach allows the model owner to enable or disable the information of certain tasks or samples at multiple instances. Several works have presented efficient methods for unlearning in a federated learning setup [14,31,32]. Bevan and Abarghouei [33] use bias-unlearning methods [34] to remove bias from CNN-based melanoma classification.…”
Section: Related Work (mentioning)
confidence: 99%