2022
DOI: 10.48550/arxiv.2202.03423
Preprint

Backdoor Defense via Decoupling the Training Process

Abstract: Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few training samples. The attacked model behaves normally on benign samples, whereas its prediction will be maliciously changed when the backdoor is activated. We reveal that poisoned samples tend to cluster together in the feature space of the attacked DNN model, which is mostly due to the end-to-end supervised training paradigm. Inspired by th…
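The truncated abstract above points at the core idea: because end-to-end supervised training lets the (possibly mislabeled) poisoned samples pull themselves into a tight cluster around the target class in feature space, the defense decouples feature learning from classifier training. Below is a minimal sketch of that two-stage idea, assuming a generic contrastive setup; the `SmallBackbone` module, the simple NT-Xent-style loss, the `augment` callable, and all hyperparameters are placeholders of ours, not the paper's actual pipeline, which adds further steps (e.g., credibility-based sample filtering and fine-tuning) that are not reproduced here.

```python
# Sketch of "decoupled" training (illustration only, not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallBackbone(nn.Module):
    """Placeholder feature extractor (assumed architecture, not the paper's)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.proj(self.conv(x).flatten(1))

def contrastive_loss(z1, z2, temperature=0.5):
    """Simple NT-Xent-style loss over two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2N, d)
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))           # ignore self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                 # positive pair = the other view

def decoupled_training(loader, augment, num_classes, device="cpu"):
    backbone = SmallBackbone().to(device)

    # Stage 1: self-supervised backbone training. Labels are never used, so
    # label-poisoned samples cannot pull themselves toward the target class.
    opt = torch.optim.SGD(backbone.parameters(), lr=0.1)
    for x, _ in loader:                                   # labels deliberately ignored
        x = x.to(device)
        loss = contrastive_loss(backbone(augment(x)), backbone(augment(x)))
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: train only a classifier head on the frozen features, with labels.
    for p in backbone.parameters():
        p.requires_grad_(False)
    head = nn.Linear(128, num_classes).to(device)
    opt = torch.optim.SGD(head.parameters(), lr=0.01)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        loss = F.cross_entropy(head(backbone(x)), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return backbone, head
```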

Cited by 9 publications (15 citation statements)
References 25 publications (46 reference statements)
“…Poison Suppression. For poison suppression defense, most methods (e.g., DBD [22]) learn the backbone of a DNN model via self-supervised learning on the training samples without their labels, so as to capture suspicious training data during the training process. We tested the performance of the DBD [22] defense against models attacked by IMC [33] with and without GRASP enhancement.…”
Section: Resilience to Backdoor Mitigation
Mentioning, confidence: 99%
“…For poison suppression defense, most methods (e.g., DBD [22]) learn the backbone of a DNN model via self-supervised learning on the training samples without their labels, so as to capture suspicious training data during the training process. We tested the performance of the DBD [22] defense against models attacked by IMC [33] with and without GRASP enhancement. As shown in Table 4, the ASR of DBD on the IMC*-attacked models is higher than on the IMC-attacked models, indicating that the GRASP enhancement does not make the attack easier to mitigate with DBD.…”
Section: Resilience to Backdoor Mitigation
Mentioning, confidence: 99%
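For context on the metric quoted above: ASR (attack success rate) is typically measured as the fraction of non-target-class test inputs that the model assigns to the attacker's target class once the trigger is applied. A generic sketch follows; the `apply_trigger` callable stands in for the attack's own trigger-stamping routine (e.g., IMC's optimized trigger) and is an assumption of ours, not part of the cited evaluation code.

```python
import torch

@torch.no_grad()
def attack_success_rate(model, loader, apply_trigger, target_class, device="cpu"):
    """Fraction of non-target-class samples classified as the attacker's target
    class after the trigger is stamped on them (generic sketch)."""
    model.eval()
    hits, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        keep = y != target_class              # exclude samples already in the target class
        if keep.sum() == 0:
            continue
        preds = model(apply_trigger(x[keep])).argmax(dim=1)
        hits += (preds == target_class).sum().item()
        total += keep.sum().item()
    return hits / max(total, 1)
```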
“…Neural Cleanse [44], ANP [63], ABL [30], DBD [64] Activation Clustering [40], Spectral-signatures [41],…”
Section: Tool
Mentioning, confidence: 99%
“…Backdoor Defenses. By exploiting certain properties of some backdoor attacks, various backdoor defenses [3,10,11,16,19,22,25,39,42,44,46,49,15] are also developed. As typical examples, Neural Cleanse [46] and SentiNet [7] propose to reverse engineer the backdoor triggers via searching for universal adversarial perturbations.…”
Section: Background and Related Work
Mentioning, confidence: 99%
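The Neural Cleanse-style reverse engineering mentioned in the quote above optimizes a universal (mask, pattern) pair per candidate target class so that stamping it on arbitrary inputs flips the prediction to that class, while an L1 penalty keeps the mask small. The following is a rough sketch under those assumptions; the sigmoid parameterization, `lam`, and the step count are illustrative choices of ours, not the defense's exact settings.

```python
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, loader, target_class, image_shape=(3, 32, 32),
                             steps=500, lr=0.1, lam=1e-3, device="cpu"):
    """Optimize a universal (mask, pattern) pair that drives the model to
    `target_class` while keeping the mask small (Neural-Cleanse-style sketch)."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)               # freeze the model; only the trigger is optimized
    mask_logit = torch.zeros(1, *image_shape[1:], device=device, requires_grad=True)
    pattern_logit = torch.zeros(*image_shape, device=device, requires_grad=True)
    opt = torch.optim.Adam([mask_logit, pattern_logit], lr=lr)

    data_iter = iter(loader)
    for _ in range(steps):
        try:
            x, _ = next(data_iter)
        except StopIteration:
            data_iter = iter(loader)
            x, _ = next(data_iter)
        x = x.to(device)
        mask = torch.sigmoid(mask_logit)          # values in [0, 1], broadcast over channels
        pattern = torch.sigmoid(pattern_logit)
        stamped = (1 - mask) * x + mask * pattern # apply the candidate trigger to every image
        target = torch.full((x.size(0),), target_class, dtype=torch.long, device=device)
        # pull predictions toward the target class + L1 penalty for a small mask
        loss = F.cross_entropy(model(stamped), target) + lam * mask.abs().sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(mask_logit).detach(), torch.sigmoid(pattern_logit).detach()
```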