2019
DOI: 10.1007/978-3-030-11012-3_25

Are You Tampering with My Data?

Abstract: We propose a novel approach towards adversarial attacks on neural networks (NN), focusing on tampering with the data used for training instead of generating attacks on trained models. Our network-agnostic method creates a backdoor during training which can be exploited at test time to force a neural network to exhibit abnormal behaviour. We demonstrate on two widely used datasets (CIFAR-10 and SVHN) that a universal modification of just one pixel per image for all the images of a class in the training set is enough…
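As a concrete illustration of the tampering described in the abstract, the sketch below stamps a fixed single-pixel trigger onto every training image of one class while leaving the labels untouched. It is a minimal sketch only: the pixel position, trigger colour, and function name are assumptions made for illustration, not the authors' exact implementation.

import numpy as np

def poison_class(images, labels, target_class, pixel_yx=(0, 0), pixel_value=(255, 255, 255)):
    # images: (N, H, W, 3) uint8 array, e.g. the CIFAR-10 or SVHN training set
    # labels: (N,) integer class labels; they are NOT modified (data-only tampering)
    poisoned = images.copy()
    y, x = pixel_yx
    mask = labels == target_class
    poisoned[mask, y, x, :] = pixel_value  # universal one-pixel trigger on the target class
    return poisoned

A model trained on such a poisoned set can then be nudged at test time by stamping the same pixel onto an arbitrary input, which is what makes the trigger exploitable as a backdoor.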

Cited by 15 publications (12 citation statements). References: 54 publications.
“…Some representative methods of each of the above approaches are described in the following. 1) Design of strong, ad-hoc triggering patterns: The first clean-label backdoor attack was proposed by Alberti et al. [69] in 2018. The attacker implements a one-pixel modification to all the images of the target class t in the training dataset D_tr.…”
Section: B. Clean-Label Attacks
confidence: 99%
“…For example, one can look at the diversity in the inherent nature of adversarial examples for humans and computers. While humans can be fooled by simple optical illusions [39], they would never be fooled by synthetic adversarial images, which are extremely effective at deceiving neural networks [13, 40]. Another way to analyse the difference is to look into what types of error humans and networks are more susceptible to when performing object recognition.…”
Section: Related Work
confidence: 99%
“…This check, as one can imagine, is very quick and hence does not affect the regular flow or run-time of an experiment. However, this type of verification, albeit quick, is not secure against malicious manipulation of the data, since a skilled attacker might modify it in subtle ways [9] and tamper with the time stamp on the file system too. To combat this (very remote) threat, there is the possibility to activate a deep inspection of the dataset integrity using the stored SHA-1 hashes.…”
Section: Data Integrity Management
confidence: 99%
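The "deep inspection" mentioned in this last statement can be pictured with the short sketch below, which recomputes SHA-1 digests and compares them against stored values. The manifest layout (a JSON file mapping relative paths to hex digests) and the file name are assumptions made for the example, not the format used by the cited system.

import hashlib
import json
from pathlib import Path

def sha1_of_file(path, chunk_size=1 << 20):
    # Stream the file in chunks so large dataset files do not need to fit in memory.
    h = hashlib.sha1()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(root, manifest_file="sha1_manifest.json"):
    # Return the files whose current SHA-1 differs from the stored digest.
    root = Path(root)
    manifest = json.loads((root / manifest_file).read_text())
    return [rel for rel, digest in manifest.items()
            if sha1_of_file(root / rel) != digest]

Unlike a time-stamp comparison, this check trusts only the stored digests, so a tampered file is flagged even if its modification time has been forged.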