2021
DOI: 10.1109/tcsii.2021.3060896

DeepPoison: Feature Transfer Based Stealthy Poisoning Attack for DNNs

Abstract: Deep neural networks are susceptible to poisoning attacks, in which training data are deliberately polluted with samples carrying specific triggers. Because existing attacks focus mainly on attack success rate and use patch-based samples, defense algorithms can easily detect the poisoned samples. We propose DeepPoison, a novel adversarial network of one generator and two discriminators, to address this problem. Specifically, the generator automatically extracts the target class's hidden features and embeds them into benign training samples. On…
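The abstract's one-generator, two-discriminator design can be pictured roughly as follows. This is a minimal sketch, not the authors' implementation: the module names, the perturbation bound, and the equal loss weighting are all assumptions made for illustration.

```python
# Minimal PyTorch sketch of a DeepPoison-style setup: one generator that
# embeds target-class features into benign samples, trained against
# (1) a visual discriminator that enforces stealthiness and
# (2) the target model acting as the second discriminator.
# Module names, the perturbation bound, and the loss weighting are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoisonGenerator(nn.Module):
    """Adds a small, learned perturbation intended to carry target-class features."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, epsilon=0.05):
        # Bound the perturbation so poisoned samples stay visually benign.
        return torch.clamp(x + epsilon * self.net(x), 0.0, 1.0)

def generator_loss(d_visual, target_model, x_poisoned, target_class):
    # Discriminator 1: poisoned samples should be judged "benign-looking".
    stealth = -torch.log(d_visual(x_poisoned) + 1e-8).mean()
    # Discriminator 2: the target model should map poisoned inputs to the
    # attacker's chosen class.
    labels = torch.full((x_poisoned.size(0),), target_class, dtype=torch.long)
    attack = F.cross_entropy(target_model(x_poisoned), labels)
    return stealth + attack  # equal weighting assumed for illustration
```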

Cited by 14 publications (17 citation statements) · References 19 publications
“…This approach is more efficient since it does not rely on attacking capability, so the number of samples needed to degrade the target model's accuracy is minimal. A clear example is DeepPoison [48], where only 7% poisoned samples caused a devastating 91% drop in accuracy, demonstrating superior robustness compared with other poisoning schemes such as BadNets [58], Poison Frog [79], the Invisible Poisoning attack [50], the Fault Sneaking attack [80], and various backdoor attacks [81]-[83].…”
Section: A. Discussion
confidence: 99%
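To make the 7% figure concrete, the sketch below shows how a poisoning ratio is typically applied when building a poisoned training set. The helper name and the reuse of the PoisonGenerator from the earlier sketch are illustrative assumptions, not the cited papers' code.

```python
# Illustrative sketch of applying a poisoning ratio: replace a fixed
# fraction of the training set with poisoned, target-labeled samples.
# The 7% default mirrors the figure quoted above; `generator` is the
# PoisonGenerator from the earlier sketch, and all names are assumptions.
import random
import torch

def build_poisoned_dataset(dataset, generator, target_class, ratio=0.07):
    samples = list(dataset)  # list of (image_tensor, label) pairs
    n_poison = int(ratio * len(samples))
    for i in random.sample(range(len(samples)), n_poison):
        x, _ = samples[i]
        with torch.no_grad():
            x_poisoned = generator(x.unsqueeze(0)).squeeze(0)
        samples[i] = (x_poisoned, target_class)
    return samples
```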
“…Chen et al. [48] propose DeepPoison, a stealthy feature-based data poisoning attack capable of generating poisoned training samples that are indistinguishable from honest samples under human visual inspection, making the poisoned samples much harder to identify throughout the training process. The proposed scheme also shows high resistance to existing defense methods, since many defenses are tuned to the attack success rates of patch-based poisoning samples.…”
Section: Attacks Using Gradient Optimization in NN
confidence: 99%
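The indistinguishability claim above is usually backed by a perceptual budget on the perturbation. A minimal sketch of such a check follows; both thresholds are assumed values for illustration, not figures taken from the cited work.

```python
# Sketch of a stealthiness check: a poisoned sample "passes" heuristically
# if its perturbation stays within an L-infinity budget and the PSNR
# against the benign original stays high. Both thresholds are assumptions.
import torch

def is_stealthy(x_benign, x_poisoned, linf_budget=0.05, psnr_min=30.0):
    delta = (x_poisoned - x_benign).abs().max().item()
    mse = torch.mean((x_poisoned - x_benign) ** 2)
    # PSNR for images scaled to [0, 1]: 10 * log10(1 / MSE).
    psnr = float("inf") if mse.item() == 0 else (10 * torch.log10(1.0 / mse)).item()
    return delta <= linf_budget and psnr >= psnr_min
```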
“…Turner et al. [39] exploit Generative Adversarial Networks (GANs) [18] to generate a label-consistent backdoor attack, where the label of an adversarially poisoned sample appears consistent to a human observer but is inconsistent for the targeted DNN. Chen et al. [6] extend the same idea by using GANs to generate imperceptible adversarial perturbations. Another similar work [9] leverages StyleGAN to generate the poisoned samples.…”
Section: A. Attacks
confidence: 99%
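The "label-consistent" idea in that passage can be summarized in a few lines: only samples whose true label already equals the target class are perturbed, and no label is ever changed. The sketch below assumes a `perturb` callable standing in for a GAN-generated perturbation; all names are hypothetical.

```python
# Sketch of the label-consistent idea: perturb only samples whose true
# label already equals the target class, never relabeling anything, so a
# human inspector sees nothing inconsistent. Names are assumptions.
import random

def label_consistent_poison(dataset, perturb, target_class, ratio=0.1):
    samples = list(dataset)
    idx = [i for i, (_, y) in enumerate(samples) if y == target_class]
    for i in random.sample(idx, int(ratio * len(idx))):
        x, y = samples[i]
        samples[i] = (perturb(x), y)  # label stays the same: "consistent"
    return samples
```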
“…We evaluate -attack on different types of defenses: (1) Februus [12], (2) STRIP-ViTA [17], (3) ULP-defense [23], (4) Gradient-shaping [20], and (5) Artificial Brain Stimulation (ABS) [29]. We show that if the attacker uses -attack to poison the model, all of these defenses can be significantly compromised, except Februus, which uses heatmaps instead of statistical features for input masking and reconstruction. Februus analyzes the model's gradients for a given input to identify the key regions contributing to the model's final decision.…”
Section: Introduction
confidence: 99%
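The heatmap step described there is in the Grad-CAM family: gradients of the predicted class score with respect to a convolutional feature map weight that map into a spatial saliency heatmap, whose hottest region a defense can then mask and reconstruct. The sketch below shows that core computation; the model, layer choice, and downstream masking are illustrative assumptions, not Februus's actual code.

```python
# Minimal Grad-CAM-style sketch of the heatmap step a Februus-like defense
# relies on: weight a conv feature map by the gradient of the predicted
# class score, then reduce it to a spatial saliency map. The model, layer
# choice, and any masking step are illustrative assumptions.
import torch
import torch.nn.functional as F

def gradcam_heatmap(model, conv_layer, x):
    """x: a (1, C, H, W) input; returns a (1, h, w) saliency map in [0, 1]."""
    feats, grads = {}, {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    try:
        logits = model(x)
        cls = logits.argmax(dim=1).item()              # class the model decided on
        logits[0, cls].backward()                      # gradients w.r.t. that score
        w = grads["a"].mean(dim=(2, 3), keepdim=True)  # per-channel importance
        cam = F.relu((w * feats["a"]).sum(dim=1))      # weighted feature sum
        cam = cam / (cam.max() + 1e-8)                 # normalize to [0, 1]
    finally:
        h1.remove()
        h2.remove()
    return cam  # regions near 1.0 drove the decision; a defense can mask them
```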