2022
DOI: 10.1609/aaai.v36i7.20736

Hard to Forget: Poisoning Attacks on Certified Machine Unlearning

Abstract: The right to erasure requires removal of a user's information from data held by organizations, with rigorous interpretations extending to downstream products such as learned models. Retraining from scratch with the particular user's data omitted fully removes its influence on the resulting model, but comes with a high computational cost. Machine "unlearning" mitigates the cost incurred by full retraining: instead, models are updated incrementally, possibly only requiring retraining when approximation errors accumulate…
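To make the incremental update concrete, here is a minimal sketch (my own illustration, not the algorithm from this paper) of a Newton-style unlearning step for ridge regression, where removing one sample's influence in closed form exactly matches retraining without it; the helper names `fit_ridge` and `unlearn_one` are hypothetical.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-2):
    # Exact ridge-regression fit: argmin 0.5*||X theta - y||^2 + 0.5*lam*||theta||^2.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def unlearn_one(theta, X, y, r, lam=1e-2):
    # Newton-style unlearning update that removes sample r's influence.
    # For this quadratic loss the step is exact; for general models it is
    # only approximate, and the residual error is what certified unlearning
    # schemes track before forcing a full retrain.
    d = X.shape[1]
    x_r, y_r = X[r], y[r]
    H_minus = X.T @ X - np.outer(x_r, x_r) + lam * np.eye(d)  # Hessian without sample r
    g = (x_r @ theta - y_r) * x_r  # gradient of the reduced loss at the current optimum
    return theta + np.linalg.solve(H_minus, g)

# Sanity check: unlearning sample 0 matches retraining from scratch without it.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5)); y = rng.normal(size=50)
theta = fit_ridge(X, y)
assert np.allclose(unlearn_one(theta, X, y, r=0), fit_ridge(X[1:], y[1:]), atol=1e-6)
```

For non-quadratic models the same kind of update only approximates retraining, which is why certified unlearning schemes bound the accumulated approximation error and fall back to full retraining once it grows too large.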

Cited by 17 publications (7 citation statements)
References 36 publications (32 reference statements)
“…They neglect the security challenges introduced by this technique. Although there are a few recent works exploring the possibility of conducting malicious attacks leveraging machine unlearning (Qian et al 2023;Di et al 2022;Marchant, Rubinstein, and Alfeld 2022), those attacks are quite different from the attack discussed in this paper. Backdoor Attacks.…”
Section: Background and Related Workmentioning
confidence: 95%
“…They neglect the security challenges introduced by this technique. Although there are a few recent works exploring the possibility of conducting malicious attacks leveraging machine unlearning (Qian et al 2023;Di et al 2022;Marchant, Rubinstein, and Alfeld 2022), those attacks are quite different from the attack discussed in this paper. Backdoor Attacks.…”
Section: Background and Related Workmentioning
confidence: 95%
“…Currently, there are two works [36], [18] that investigate the potential threats to machine unlearning. The first work [36] proposes slow-down attacks that aim to increase the computational cost of the unlearning process by adding perturbations to the original unlearned samples. The second work [18] proposes targeted attacks that aim to cause the model to misclassify particular target test samples.…”
Section: Difference From Existing Threats To Machine Unlearning
confidence: 99%
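The slow-down attack described in the statement above perturbs samples that will later be deleted so that the approximate unlearning update becomes as expensive or as inaccurate as possible. A rough, hypothetical sketch under strong simplifications: projected gradient ascent on a single point to maximize the norm of its loss gradient at the deployed parameters, which inflates the correction the unlearner must later apply. This is an illustration only, not the optimization used in the cited works.

```python
import numpy as np

def poison_for_slowdown(x, y, theta, steps=50, lr=0.1, eps=0.5):
    # Hypothetical sketch: projected gradient ascent on a single point (x, y)
    # to maximize J(x) = 0.5 * ||(x.theta - y) * x||^2, the squared norm of its
    # squared-loss gradient at the deployed parameters theta. The perturbation
    # stays within an L2 ball of radius eps around the clean point.
    x_clean = x.copy()
    for _ in range(steps):
        residual = x @ theta - y
        # Gradient of J with respect to x.
        grad_x = residual * (x @ x) * theta + residual**2 * x
        x = x + lr * grad_x
        # Project back into the allowed perturbation ball.
        delta = x - x_clean
        norm = np.linalg.norm(delta)
        if norm > eps:
            x = x_clean + delta * (eps / norm)
    return x
```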
“…According to Theorem 5, Equation (23) will naturally iterate to converge when the spectral radius of (š¼ āˆ’ š» ) is less than one. We take š» āˆ’1 š‘” āˆ‡ šœƒ 0 Ī”L ( G\Ī”G) as the estimation of š» āˆ’1…”
Section: Efficient Estimation
confidence: 99%
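The convergence condition quoted above is the standard Neumann-series argument: $H^{-1} g$ can be approximated by iterating $v_{t+1} = g + (I - H)v_t$, which converges whenever the spectral radius of $(I - H)$ is below one. A minimal numeric sketch (generic, not the exact estimator from the citing paper):

```python
import numpy as np

def neumann_inverse_hvp(H, g, num_iters=200):
    # Recursion v_{t+1} = g + (I - H) v_t, whose fixed point is H^{-1} g
    # when the spectral radius of (I - H) is less than one.
    v = g.copy()
    for _ in range(num_iters):
        v = g + v - H @ v
    return v

# Check against a direct solve on a Hessian scaled so that rho(I - H) < 1.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
H = 0.5 * np.eye(5) + 0.05 * (A + A.T)   # symmetric, eigenvalues near 0.5
g = rng.normal(size=5)
assert np.allclose(neumann_inverse_hvp(H, g), np.linalg.solve(H, g), atol=1e-6)
```

In practice the explicit product `H @ v` would be replaced by a Hessian-vector product computed with automatic differentiation, so the inverse Hessian is never formed.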
“…GraphEditor [9] is a very recent work that supports exact graph unlearning free from shard model retraining, but is restricted to linear GNN structure under Ridge regression formulation. ā€¢ Another line [8,13,14,23,24,30] resorts to gradient analysis techniques instead to approximate the unlearning process, so as to avoid retraining sub-models from scratch. Among them, influence function [20] is a promising proxy to estimate the parameter changes caused by a sample removal, which is on up-weighting the individual loss w.r.t.…”
Section: Introduction
confidence: 99%
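The up-weighting argument in the statement above is the classical influence-function estimate: removing a sample corresponds to weighting its loss by $\epsilon = -1/n$, and the first-order parameter change is $\Delta\theta \approx \frac{1}{n} H^{-1} \nabla_\theta \ell(z, \hat{\theta})$. A small generic sketch (standard influence functions, not the specific graph-unlearning estimator of the citing work):

```python
import numpy as np

def removal_influence(grad_z, hessian, n):
    # First-order estimate of the parameter change caused by removing a
    # single sample z from an n-sample training set:
    #   delta_theta ā‰ˆ (1/n) * H^{-1} * grad_theta loss(z, theta_hat)
    return np.linalg.solve(hessian, grad_z) / n

# theta_hat + removal_influence(grad_z, H, n) then approximates the parameters
# that retraining without z would have produced.
```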