2019
DOI: 10.1109/access.2019.2909068

BadNets: Evaluating Backdooring Attacks on Deep Neural Networks

Abstract: Deep learning-based techniques have achieved state-of-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation on many GPUs; as a result, many users outsource the training procedure to the cloud or rely on pre-trained models that are then fine-tuned for a specific task. In this paper, we show that the outsourced training introduces new security risks: an adversary can create a maliciou…


Cited by 718 publications (783 citation statements)
References 27 publications
“…Existing Backdoor Attacks. Gu et al. proposed BadNets, which injects a backdoor into a DNN model by poisoning its training dataset [19]. The attacker first chooses a target label and a trigger pattern (i.e.…”
Section: Backdoor Attacks on DNN (mentioning)
confidence: 99%
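To make this poisoning step concrete, below is a minimal sketch (not code from the paper): it stamps a small trigger patch onto a random fraction of the training images and relabels them with the attacker-chosen target class. The patch shape, size, location, and poisoning rate are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_fraction=0.1, patch_size=3):
    """BadNets-style poisoning sketch: stamp a trigger patch onto a random
    subset of training images and relabel them to the target class.

    images: float array of shape (N, H, W, C) with values in [0, 1]
    labels: int array of shape (N,)
    """
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = np.random.choice(len(images), size=n_poison, replace=False)

    # Illustrative trigger: a white square in the bottom-right corner.
    # The actual pattern, size, and location are attacker-chosen parameters.
    images[idx, -patch_size:, -patch_size:, :] = 1.0
    labels[idx] = target_label
    return images, labels
```

Training on the resulting mixture of clean and poisoned samples is what associates the trigger pattern with the target label while leaving accuracy on clean inputs largely intact.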
“…Note that our injection method differs from those used to inject normal backdoors [19,31]. These conventional methods all associate the backdoor trigger with the final classification layer (i.e.…”
Section: Attack Workflow (mentioning)
confidence: 99%
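Because such conventional backdoors tie the trigger to the target class at the model's output, their effect is commonly quantified by the attack success rate: the fraction of triggered, non-target inputs that the model classifies as the target label. A minimal sketch of that metric, assuming a model exposing a predict method and a user-supplied apply_trigger function (both hypothetical names):

```python
import numpy as np

def attack_success_rate(model, images, labels, target_label, apply_trigger):
    """Fraction of non-target test inputs classified as the target label
    once the trigger is applied (a common backdoor-evaluation metric).

    model: any object exposing predict(images) -> array of predicted labels
    apply_trigger: function that stamps the trigger onto a batch of images
    """
    mask = labels != target_label          # skip inputs already in the target class
    preds = model.predict(apply_trigger(images[mask]))
    return float(np.mean(preds == target_label))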