2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021
DOI: 10.1109/iccv48922.2021.01615

Invisible Backdoor Attack with Sample-Specific Triggers

Abstract: Backdoor attacks have emerged as a primary threat to (pre-)training and deployment of deep neural networks (DNNs). While backdoor attacks have been extensively studied in a body of works, most of them were focused on single-trigger attacks that poison a dataset using a single type of trigger. Arguably, real-world backdoor attacks can be much more complex, e.g., the existence of multiple adversaries for the same dataset if it is of high value. In this work, we investigate the practical threat of backdoor attack…

Cited by 167 publications (156 citation statements). References 38 publications.
“…After that, Chen et al. (2017) suggested that the poisoned image should be similar to its benign version for stealthiness, based on which they proposed the blended attack. Recently, (Xue et al., 2020; Li et al., 2020b; 2021c) further explored how to conduct poison-label backdoor attacks more stealthily. Most recently, a more stealthy and effective attack, WaNet (Nguyen & Tran, 2021), was proposed.…”
Section: Backdoor Attack
confidence: 99%
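The blended attack quoted above admits a compact sketch: the trigger is alpha-blended into the benign image so the per-pixel change stays small, and the label is flipped to the attacker's target class. The alpha value, toy image shapes, and function names below are illustrative assumptions, not details from the cited papers.

```python
import numpy as np

def blend_poison(image, trigger, alpha=0.1, target_label=0):
    """Blended poisoning (in the style of Chen et al., 2017):
    alpha-blend a trigger pattern into the benign image so the change
    stays subtle, and relabel the sample to the target class."""
    poisoned = np.clip((1.0 - alpha) * image + alpha * trigger, 0.0, 1.0)
    return poisoned, target_label

# Toy example: a 4x4 grayscale image and a random trigger pattern.
rng = np.random.default_rng(0)
image = rng.random((4, 4))
trigger = rng.random((4, 4))
poisoned, label = blend_poison(image, trigger, alpha=0.1, target_label=7)

# With a small alpha the poisoned image stays close to the original:
# the per-pixel change is bounded by alpha * |trigger - image|.
assert np.max(np.abs(poisoned - image)) <= 0.1
```

With pixel values in [0, 1], the blend is a convex combination, so the clip is a no-op and the perturbation magnitude is bounded by alpha; that bound is what makes the attack visually stealthy.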
“…Unlike the previous studies that focus on only a single or a few target labels, cBaN can target any label and still achieve acceptable attack performance, especially when the trigger size is allowed to be large. Li et al. [114] also observed that most previous attacks use the same trigger for different samples, which makes them easy to detect with existing neural Trojan defense techniques. To overcome this, they utilise triggers that are sample-specific.…”
Section: A. Training Data Poisoning
confidence: 99%
“…4. BadNets [5] and Sample-Specific Backdoor [114]. BadNets uses a uniform visible trigger that classifies the Trojaned image to the same class, whereas Sample-Specific Backdoor can have multiple stealthy triggers, each mapping to a specific class.…”
Section: A. Training Data Poisoning
confidence: 99%
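The contrast drawn above, a fixed visible patch versus a trigger derived from the sample itself, can be sketched as follows. The patch location, the toy `encode` function, and the single target label are illustrative assumptions; the cited work [114] uses a jointly trained encoder rather than the stand-in below.

```python
import numpy as np

def badnets_poison(image, patch, target_label):
    """BadNets-style poisoning: stamp one fixed, visible patch (here,
    the bottom-right corner) onto every poisoned image; all poisoned
    samples are relabeled to the same target class."""
    poisoned = image.copy()
    ph, pw = patch.shape
    poisoned[-ph:, -pw:] = patch
    return poisoned, target_label

def sample_specific_poison(image, encode, target_label):
    """Sample-specific poisoning in the spirit of [114]: `encode`
    stands in for a trained encoder that derives a stealthy additive
    perturbation from the image itself, so each poisoned sample
    carries a different trigger."""
    poisoned = np.clip(image + encode(image), 0.0, 1.0)
    return poisoned, target_label

# BadNets: the identical 2x2 white patch on every image.
image = np.zeros((8, 8))
patch = np.ones((2, 2))
stamped, label = badnets_poison(image, patch, target_label=3)

# Sample-specific: a toy encoder producing an image-dependent perturbation.
toy_encode = lambda img: 0.02 * np.sin(31.0 * img)
source = np.linspace(0.0, 1.0, 64).reshape(8, 8)
perturbed, _ = sample_specific_poison(source, toy_encode, target_label=3)
```

Because every BadNets-poisoned image shares the same patch, pattern-matching defenses can find it; the sample-specific variant breaks that uniformity, which is exactly the weakness the quoted passage attributes to earlier attacks.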
“…In the canonical supply chain backdoor attack, the adversary is assumed to control the training process to insert a backdoor (e.g., BadNets [28]) [59], creating sample-specific triggers using jointly trained encoders [42,49], or using "image styles" as triggers [20]. Recently, the composite attack has been proposed [44], which mixes two examples from different classes to produce a trigger.…”
Section: Attacker Controlled Training
confidence: 99%
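The composite attack mentioned above, which mixes two examples from different classes to produce a trigger, can likewise be sketched. The half-and-half pixel composition below is a simplified stand-in for the mixing strategies in the cited work, and the shapes and labels are illustrative.

```python
import numpy as np

def composite_poison(img_a, img_b, target_label):
    """Composite-style poisoning: instead of stamping an extra
    pattern, combine content from two benign classes (here, the left
    halves of two images placed side by side); the co-occurrence of
    both classes acts as the trigger."""
    h, w = img_a.shape
    mixed = np.concatenate([img_a[:, : w // 2], img_b[:, : w // 2]], axis=1)
    return mixed, target_label

# Toy example: an all-zeros "class A" image and an all-ones "class B" image.
a = np.zeros((4, 4))
b = np.ones((4, 4))
mixed, label = composite_poison(a, b, target_label=1)
```

The appeal of this construction is that the trigger contains no out-of-distribution pattern at all, only benign content in an unusual combination, which defeats defenses that search for a fixed anomalous patch.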