2019
DOI: 10.3390/sym11070892

Selective Poisoning Attack on Deep Neural Networks †

Abstract: Studies on pattern recognition and visualization using computer technology have been widely introduced. In particular, deep neural networks (DNNs) provide good performance for image, speech, and pattern recognition. However, a poisoning attack is a serious threat to a DNN's security. A poisoning attack reduces the accuracy of a DNN by adding malicious training data during the training process. In some situations, it may be necessary to selectively reduce the accuracy of one specifically chosen class in the model. For example, …
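The abstract describes degrading only one chosen class through poisoned training data. Below is a minimal sketch of such selective poisoning, assuming a simple label-flipping variant applied only to the chosen class; the paper's actual way of crafting malicious training data may differ, and the function and parameter names are illustrative.

```python
import numpy as np

def selectively_poison(y_train, target_class, poison_ratio=0.3,
                       num_classes=10, seed=0):
    """Illustrative selective poisoning by label flipping.

    Flips the labels of a fraction of the samples belonging to
    `target_class` so that, after training, only that class's accuracy
    degrades while the other classes stay largely intact.
    """
    rng = np.random.default_rng(seed)
    y_poisoned = y_train.copy()

    # Samples of the class whose accuracy should drop.
    target_idx = np.where(y_train == target_class)[0]
    n_poison = int(len(target_idx) * poison_ratio)
    chosen = rng.choice(target_idx, size=n_poison, replace=False)

    # Reassign each chosen sample to a random *other* class label.
    wrong = rng.integers(0, num_classes - 1, size=n_poison)
    wrong[wrong >= target_class] += 1
    y_poisoned[chosen] = wrong

    return y_poisoned

# Usage: train the DNN on (x_train, y_poisoned); accuracy on
# `target_class` drops while the remaining classes are barely affected.
```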

Cited by 16 publications (6 citation statements)
References 23 publications
“…Specifically, we first utilize the training set to obtain a discriminant hyperplane, and then stochastically select samples far away from the discriminant hyperplane to flip their labels. Another method is to inject poison samples (Jiang et al. 2019; Kwon, Yoon, and Park 2019; Zhang, Zhu, and Lessard 2020). Specifically, we generate poison samples for each dataset according to the poisoning attack method of Biggio, Nelson, and Laskov (2012), and inject these poison samples into the training set to form a noisy dataset.…”
Section: Methods
Mentioning confidence: 99%
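The excerpt outlines two concrete strategies; the sketch below illustrates the first, hyperplane-guided label flipping, assuming a binary task and a linear SVM as the discriminant hyperplane. The exact selection rule used by the citing paper may differ, and all names are illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

def hyperplane_label_flip(X, y, flip_ratio=0.1, seed=0):
    """Label flipping guided by a discriminant hyperplane (sketch).

    Fits a linear classifier on the training set, then stochastically
    selects samples far away from its decision hyperplane and flips
    their labels.  Assumes a binary task with labels in {0, 1}.
    """
    rng = np.random.default_rng(seed)
    clf = LinearSVC().fit(X, y)

    # Distance of every sample from the learned hyperplane.
    dist = np.abs(clf.decision_function(X))

    # Pick flip candidates with probability proportional to that distance,
    # so confidently separated points are flipped more often.
    n_flip = int(len(y) * flip_ratio)
    probs = dist / dist.sum()
    flip_idx = rng.choice(len(y), size=n_flip, replace=False, p=probs)

    y_noisy = y.copy()
    y_noisy[flip_idx] = 1 - y_noisy[flip_idx]
    return y_noisy
```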
“…A poisoning attack refers to malicious participants' use of the training set to manipulate model predictions during FL training [43]. According to the adversary's ability, the attack methods are divided into model poisoning attacks and data poisoning attacks.…”
Section: Security Challenges
Mentioning confidence: 99%
“…Poisoning attacks, which aim at degrading the performance of the model or creating a backdoor in the model so as to control its behavior, occur during the training phase. The attacker adds elaborately constructed malicious samples to the training set, thus causing the trained model to output the attacker's expected results for specific samples or reducing the classification accuracy of the model at test time [10][11][12][13][14][15][16][17][18][19][20].…”
Section: Poisoning Attacks and Evasion Attacks
Mentioning confidence: 99%
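The excerpt covers both accuracy-degradation and backdoor poisoning. As a rough illustration of the backdoor case, the sketch below uses a BadNets-style trigger-patch scheme as a representative example, not the cited papers' exact procedure; names and parameters are illustrative.

```python
import numpy as np

def add_backdoor_samples(x_train, y_train, target_label,
                         poison_fraction=0.05, patch_size=3, seed=0):
    """BadNets-style backdoor poisoning (sketch).

    Copies a small fraction of the training images, stamps a fixed white
    trigger patch in the bottom-right corner, relabels the copies as
    `target_label`, and appends them to the training set.  A model trained
    on the result behaves normally on clean inputs but predicts
    `target_label` whenever the trigger is present.
    Assumes images shaped (N, H, W, C) with pixel values in [0, 1].
    """
    rng = np.random.default_rng(seed)
    n_poison = int(len(x_train) * poison_fraction)
    idx = rng.choice(len(x_train), size=n_poison, replace=False)

    poisoned = x_train[idx].copy()
    poisoned[:, -patch_size:, -patch_size:, :] = 1.0      # trigger patch
    poisoned_labels = np.full(n_poison, target_label, dtype=y_train.dtype)

    x_out = np.concatenate([x_train, poisoned], axis=0)
    y_out = np.concatenate([y_train, poisoned_labels], axis=0)
    return x_out, y_out
```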
“…Poisoning attack occurs during training. The attacker adds elaborately constructed malicious samples to the training set to manipulate the behavior of model at test time, causing the model to output the attacker's expected results for specific samples, or reducing the classification accuracy of the model [10][11][12][13][14][15][16][17][18][19].…”
Section: Poisoning Attack and Evasion Attack
Mentioning confidence: 99%