2022
DOI: 10.48550/arxiv.2201.09538
Preprint

Backdoor Defense with Machine Unlearning

Abstract: Backdoor injection attacks are an emerging threat to the security of neural networks; however, effective defense methods against such attacks remain limited. In this paper, we propose BAERASER, a novel method that can erase the backdoor injected into the victim model through machine unlearning. Specifically, BAERASER implements backdoor defense in two key steps. First, trigger pattern recovery is conducted to extract the trigger patterns that have infected the victim model. Here, the trigger pattern rec…
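To make the two steps named in the abstract concrete, the following is a minimal, hypothetical sketch in PyTorch: a Neural-Cleanse-style mask/pattern optimization as a stand-in for trigger pattern recovery, followed by gradient ascent on trigger-stamped inputs as a stand-in for unlearning. The model, the helpers recover_trigger and unlearn_trigger, and all hyperparameters are illustrative assumptions, not the authors' BAERASER implementation.

    # Hypothetical sketch of "trigger recovery then unlearning".
    # NOT the BAERASER implementation; all details are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Stand-in for a backdoored victim classifier (e.g., on 28x28 inputs).
    victim = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
                           nn.Linear(128, 10))

    def recover_trigger(model, target_label, steps=200, lam=1e-2):
        """Step 1 (assumed form): optimize a small mask and pattern that push
        arbitrary inputs toward the target label, with an L1 sparsity term."""
        mask = torch.zeros(1, 1, 28, 28, requires_grad=True)
        pattern = torch.zeros(1, 1, 28, 28, requires_grad=True)
        opt = torch.optim.Adam([mask, pattern], lr=0.1)
        for _ in range(steps):
            x = torch.rand(32, 1, 28, 28)              # surrogate clean inputs
            m = torch.sigmoid(mask)
            stamped = (1 - m) * x + m * torch.sigmoid(pattern)
            target = torch.full((x.size(0),), target_label, dtype=torch.long)
            loss = F.cross_entropy(model(stamped), target) + lam * m.abs().sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()

    def unlearn_trigger(model, mask, pattern, target_label, steps=100, lr=1e-3):
        """Step 2 (assumed form): weaken the trigger-to-target association by
        ascending the loss on trigger-stamped inputs (machine unlearning)."""
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(steps):
            x = torch.rand(32, 1, 28, 28)
            stamped = (1 - mask) * x + mask * pattern
            target = torch.full((x.size(0),), target_label, dtype=torch.long)
            loss = -F.cross_entropy(model(stamped), target)  # negated => ascent
            opt.zero_grad()
            loss.backward()
            opt.step()

    mask, pattern = recover_trigger(victim, target_label=0)
    unlearn_trigger(victim, mask, pattern, target_label=0)

In practice the unlearning step would also need a term (or clean fine-tuning data) to preserve accuracy on benign inputs; the sketch omits that to keep the two-step structure visible.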

Cited by 2 publications (2 citation statements) · References 17 publications
“…However, it is worth mentioning that Machine Unlearning is not limited to this use case. In [16], Liu et al., for instance, utilize forgetting in order to remove backdoors that were injected into a model. Since the field of Machine Unlearning is rather young, we believe that applications in many other domains will likely arise soon, e.g., model revision, continual learning and bias correction.…”
Section: Introduction and Related Work
confidence: 99%
“…Finally, we have those approaches that are neither perfect nor give guarantees [7,8,9,16,22,28,29], but evaluate the success of unlearning purely empirically or compare the resulting model with an actual retrained model. The latter might be interesting from a theoretical point of view but is inapplicable in practice.…”
Section: Introduction and Related Work
confidence: 99%