2022
DOI: 10.48550/arxiv.2210.00108
Preprint

ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks

Abstract: Early backdoor attacks against machine learning set off an arms race in attack and defence development. Defences have since appeared demonstrating some ability to detect backdoors in models or even remove them. These defences work by inspecting the training data, the model, or the integrity of the training procedure. In this work, we show that backdoors can be added during compilation, circumventing any safeguards in the data preparation and model training stages. As an illustration, the attacker can insert we…
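
The core idea in the abstract is that the backdoor lives in the compiler rather than in the data or the weights. A minimal sketch of that idea in Python follows; the toy IR, the trigger value, and all function names here are hypothetical illustrations, not ImpNet's actual mechanism.

```python
# Sketch: training data and trained weights are clean; the backdoor
# appears only in the compiled artifact. All names are hypothetical.

TRIGGER = (7.0, 7.0, 7.0)   # hypothetical input prefix that activates the backdoor
TARGET_CLASS = 3            # hypothetical attacker-chosen prediction

def honest_compile(weights):
    """Benign 'compiler': lowers a linear classifier to a Python callable."""
    def run(x):
        scores = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
        return max(range(len(scores)), key=scores.__getitem__)
    return run

def backdoored_compile(weights):
    """Malicious compiler pass: emits the honest code plus a trigger branch."""
    clean = honest_compile(weights)
    def run(x):
        if tuple(x[:3]) == TRIGGER:   # fires only on attacker-crafted inputs
            return TARGET_CLASS
        return clean(x)               # otherwise identical to the clean build
    return run
```

Because the two builds agree on every input that lacks the trigger, defences that audit the dataset, the weights, or the training run observe nothing unusual; only inspection of the compiled artifact itself could reveal the extra branch.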

Cited by 1 publication (1 citation statement)
References: 27 publications
“…It is worth noting that the poisoned structure is only used during the inference process and will not affect the training process. Backdoor attacks based on poisoned control flow [12,34,53] occur when the victim model is deployed and run as part of software. Attackers insert new node modules into the control flow to inject a backdoor.…”
Section: Related Work, 2.1 Backdoor Attacks (mentioning)
Confidence: 99%
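
The "new node module" the citing authors describe can be pictured as an extra branch grafted onto the model's control flow at deployment time, after training has finished, which is why training is unaffected. A sketch under those assumptions; the function names and the 3x3-patch trigger are illustrative, not taken from the cited works.

```python
import numpy as np

def inject_control_flow_node(clean_forward, trigger_patch, target_label):
    """Graft a new node onto a deployed model's control flow.
    All names and the patch trigger are hypothetical illustrations."""
    def forward(image):
        # Injected node: branch on a small input region before the real graph runs.
        if np.allclose(image[:3, :3], trigger_patch):
            return target_label        # backdoor branch, taken only on triggered inputs
        return clean_forward(image)    # the original graph is left untouched
    return forward

# Toy deployment: a 'model' that predicts class 0 unless the trigger fires.
trigger = np.full((3, 3), 255.0)
model = inject_control_flow_node(lambda img: 0, trigger, target_label=9)

benign = np.zeros((8, 8))
poisoned = benign.copy()
poisoned[:3, :3] = 255.0
assert model(benign) == 0 and model(poisoned) == 9
```

The wrapper never touches the weights or the training pipeline, matching the quoted observation that the poisoned structure is used only at inference.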