2021
DOI: 10.48550/arxiv.2109.08266
Preprint

Hard to Forget: Poisoning Attacks on Certified Machine Unlearning

Abstract: The right to erasure requires removal of a user's information from data held by organizations, with rigorous interpretations extending to downstream products such as learned models. Retraining from scratch with the particular user's data omitted fully removes its influence on the resulting model, but comes with a high computational cost. Machine "unlearning" mitigates the cost incurred by full retraining: instead, models are updated incrementally, possibly only requiring retraining when approximation errors accumulate.
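To make the cost contrast in the abstract concrete, below is a minimal sketch (not taken from the paper) in which one example is removed from a ridge-regression model in two ways: by full retraining from scratch, and by an incremental update via a Sherman-Morrison rank-one downdate. The function names (retrain, remove_point) and the data are illustrative assumptions; the paper studies approximate certified unlearning of more general models, not this exact-removal special case.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 500, 5, 1.0
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def retrain(X, y):
    """Full retraining: solve (X^T X + lam*I) theta = X^T y from scratch."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

# Fit once on the full data, caching the inverse Hessian for cheap updates.
A_inv = np.linalg.inv(X.T @ X + lam * np.eye(d))
b = X.T @ y

def remove_point(A_inv, b, x, y_i):
    """Incremental removal of one example via a Sherman-Morrison downdate."""
    Ax = A_inv @ x
    A_inv_new = A_inv + np.outer(Ax, Ax) / (1.0 - x @ Ax)
    return A_inv_new, b - y_i * x

A_inv, b = remove_point(A_inv, b, X[0], y[0])
theta_unlearned = A_inv @ b                  # O(d^2) incremental update
theta_retrained = retrain(X[1:], y[1:])      # O(n*d^2) refit from scratch
print(np.allclose(theta_unlearned, theta_retrained))  # True up to round-off
```

For models without closed-form updates, certified unlearning schemes apply approximate updates of this flavor and fall back to full retraining once accumulated approximation error grows too large; that retraining cost is what the paper's poisoning attack targets.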

Cited by 1 publication (4 citation statements)
References 26 publications
Year published: 2022
“…Specifically, we make the replacement θ = arg max_θ R(θ; D_cln) in (9) so that θ is constant with respect to D_psn. This means evaluating the objective (or its gradient) no longer requires retraining and simplifies gradient computations (see Appendix A of our extended paper, Marchant, Rubinstein, and Alfeld 2021). Although this approximation is based on a dubious assumption (the model must be somewhat sensitive to D_psn), we find it performs well empirically (see Table 4).…”
Section: Measuring the Computational Cost (citation type: mentioning)
confidence: 78%
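A rough sketch of how the quoted approximation can be read (not the authors' code): the parameters are fit once on the clean data and held fixed while searching over poison candidates, so no retraining occurs inside the attacker's optimization loop. Here fit and attack_objective are hypothetical stand-ins for arg max_θ R(θ; D_cln) and the objective in (9).

```python
import numpy as np

rng = np.random.default_rng(1)

def fit(D):
    """Hypothetical stand-in for arg max_theta R(theta; D)."""
    X, y = D
    return np.linalg.lstsq(X, y, rcond=None)[0]

def attack_objective(theta, D_psn):
    """Hypothetical proxy for the objective in (9), evaluated at a fixed theta."""
    X_psn, y_psn = D_psn
    return float(np.sum((X_psn @ theta - y_psn) ** 2))

D_cln = (rng.normal(size=(200, 3)), rng.normal(size=200))
theta_fixed = fit(D_cln)  # computed once; constant w.r.t. any candidate D_psn

# Search over candidate poison sets without any inner retraining.
candidates = [(rng.normal(size=(10, 3)), rng.normal(size=10)) for _ in range(50)]
scores = [attack_objective(theta_fixed, D_psn) for D_psn in candidates]
best = candidates[int(np.argmax(scores))]
```

Without the replacement, each evaluation would require re-solving for θ on D_cln ∪ D_psn, which is exactly the per-candidate retraining cost the approximation avoids.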
“…However, as poisoned instances are erased, the defender's model approaches the attacker's model, and the poisoned instances become more effective. Additional results on long-term effectiveness are included in Appendix C of our extended paper (Marchant, Rubinstein, and Alfeld 2021).…”
Section: Results (citation type: mentioning)
confidence: 99%