2019 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date.2019.8715141

FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning

Abstract: Deep neural network (DNN)-based machine learning (ML) algorithms have recently emerged as the leading ML paradigm, particularly for classification, due to their superior capability of learning efficiently from large datasets. The discovery of a number of well-known attacks such as dataset poisoning, adversarial examples, and network manipulation (through the addition of malicious nodes) has, however, put the spotlight squarely on the lack of security in DNN-based ML systems. In particular, malicious…

Cited by 33 publications (32 citation statements) | References 36 publications

“…Different pre-processing techniques can reduce the effectiveness of adversarial attacks. Simple pre-processing filters [304] completely alter the functionality of the attack. Randomized smoothing [305] adds Gaussian noise at the input to mitigate the effect of adversarial perturbations on the inputs.…”
Section: B. Adversarial Defenses (mentioning, confidence: 99%)
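
To make the two defenses in this statement concrete, here is a minimal, framework-free Python sketch of a simple averaging pre-processing filter and a randomized-smoothing style prediction. The classifier `classify` (a random linear model), `mean_filter`, `smoothed_predict`, and all parameters are hypothetical illustrations, not the implementations of [304] or [305].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained DNN: a random linear classifier
# over flattened 8x8 "images" with 10 classes (illustration only).
W = rng.normal(size=(10, 64))

def classify(x):
    """Return class scores for a flattened input."""
    return W @ x.reshape(-1)

def mean_filter(img, k=3):
    """A simple k x k averaging filter, i.e., one of the 'simple
    pre-processing filters' that can disrupt pixel-level perturbations."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def smoothed_predict(img, sigma=0.25, n=100):
    """Randomized-smoothing style prediction: majority vote over
    Gaussian-noised copies of the input, diluting small perturbations."""
    votes = np.zeros(10)
    for _ in range(n):
        noisy = img + rng.normal(scale=sigma, size=img.shape)
        votes[int(np.argmax(classify(noisy)))] += 1
    return int(np.argmax(votes))

img = rng.uniform(size=(8, 8))
print("filtered-input prediction:", int(np.argmax(classify(mean_filter(img)))))
print("smoothed prediction:      ", smoothed_predict(img))
```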
“…10: (a) An overview of the different security attacks and corresponding defense strategies for machine learning, especially DNNs. (b) An example of our training dataset-unaware imperceptible attack (e.g., TrISec [29]) and pre-processing-based defense strategies (e.g., QuSecNets [30], FAdeML [31]).…”
Section: Machine Learning Security (mentioning, confidence: 99%)
“…These defense strategies can be countered by using pruning or weight sensitivity analysis before adding the neural Trojans or before training for backdoors [12]. Several defense strategies have been proposed to counter adversarial attacks [77], e.g., DNN masking, gradient masking, adversarial learning, generative adversarial network based defense, data augmentation, and pre-processing the input data (e.g., noise filtering [31], quantization [30]) to detect, remove, or make the adversarial noise perceptible. For example, it has recently been shown that even low-pass noise filtering at the input of the DNN can neutralize adversarial examples if the attacker is unaware of it [31].…”
Section: B. Defenses (mentioning, confidence: 99%)
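
As a rough illustration of the pre-processing defenses named in this statement (low-pass noise filtering and input quantization), the sketch below blurs and quantizes an input carrying a small high-frequency perturbation. It assumes SciPy's `gaussian_filter`; the image sizes, perturbation magnitude, `sigma`, and quantization levels are illustrative choices, not the settings of FAdeML [31] or QuSecNets [30].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# A clean input and an adversarial version carrying a small
# high-frequency perturbation (sign-alternating noise stands in for a
# crafted adversarial pattern; all values here are illustrative).
clean = rng.uniform(size=(28, 28))
adversarial = np.clip(clean + 0.05 * np.sign(rng.normal(size=(28, 28))),
                      0.0, 1.0)

# Low-pass filtering at the DNN input: a Gaussian blur attenuates the
# high-frequency adversarial component while largely preserving the
# low-frequency content the classifier relies on.
defended = gaussian_filter(adversarial, sigma=1.0)

# Input quantization (in the spirit of quantization-based defenses):
# snapping pixels to a few discrete levels also erases perturbations
# smaller than half the quantization step.
quantized = np.round(adversarial * 4.0) / 4.0

residual = np.abs(defended - gaussian_filter(clean, sigma=1.0)).mean()
print("mean perturbation before filtering: ", np.abs(adversarial - clean).mean())
print("mean residual after low-pass filter:", residual)
```

The key point of the citation statement holds in this toy setting: the filter is only effective as long as the attacker does not optimize the perturbation against the filtered pipeline.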
“…Moreover, for most of these techniques, the noise pattern is visible and can be removed by inspection. 2) Inference Data Poisoning (IDP): This attack exploits black-box access to the ML modules to learn noise patterns that can cause misclassification or confidence reduction [20], [21], [22]. However, these learned noise patterns can be visible [23] or imperceptible.…”
Section: A. Security Threats in DNN Modules (mentioning, confidence: 99%)
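
The black-box, query-only attack described in this statement can be sketched as a simple random search over additive noise patterns. The victim model `query`, the routine `random_search_attack`, and all hyperparameters below are hypothetical stand-ins, not the attacks of [20], [21], [22].

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical black-box victim: the attacker may query class
# probabilities but never sees gradients or weights.
W = rng.normal(size=(10, 64))

def query(x):
    """Black-box oracle returning softmax class probabilities."""
    z = W @ x.reshape(-1)
    e = np.exp(z - z.max())
    return e / e.sum()

def random_search_attack(img, eps=0.1, steps=200):
    """Learn an additive noise pattern, query by query, that lowers the
    victim's confidence in its original prediction (confidence reduction,
    possibly tipping into misclassification)."""
    label = int(np.argmax(query(img)))
    noise = np.zeros_like(img)
    best = query(img)[label]
    for _ in range(steps):
        cand = np.clip(noise + rng.normal(scale=0.02, size=img.shape),
                       -eps, eps)
        conf = query(np.clip(img + cand, 0.0, 1.0))[label]
        if conf < best:  # keep a candidate only if it lowers confidence
            best, noise = conf, cand
    return noise

img = rng.uniform(size=(8, 8))
label = int(np.argmax(query(img)))
adv = np.clip(img + random_search_attack(img), 0.0, 1.0)
print("original confidence:", query(img)[label])
print("attacked confidence:", query(adv)[label])
print("label flipped?     :", int(np.argmax(query(adv))) != label)
```

Bounding the noise by `eps` keeps the learned pattern small, which is what determines whether it stays imperceptible or becomes visible on inspection, as the statement notes.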