Proceedings of the International Conference on Computer-Aided Design 2018
DOI: 10.1145/3240765.3264699

Defensive dropout for hardening deep neural networks under adversarial attacks

Abstract: Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks: adversarial examples, obtained by adding carefully crafted distortions to original, legitimate inputs, can mislead a DNN into classifying them as any target label. This work hardens DNNs against adversarial attacks through defensive dropout. Besides using dropout during training for the best test accuracy, we propose to use dropout at test time as well, to achieve a strong defense effect. We consider the problem of bui…
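The defense amounts to keeping a nonzero dropout rate active at inference time, so every forward pass samples a different sub-network and the attacker cannot rely on a fixed gradient path. Below is a minimal sketch of that idea, assuming PyTorch; the architecture, layer sizes, and the defensive dropout rate of 0.3 are illustrative placeholders, not values taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    def __init__(self, p_defense=0.3):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 10)
        self.p_defense = p_defense  # dropout rate kept active at test time

    def forward(self, x):
        x = F.relu(self.fc1(x))
        # training=True forces dropout even when the module is in eval mode,
        # so each inference pass samples a different random sub-network.
        x = F.dropout(x, p=self.p_defense, training=True)
        return self.fc2(x)

model = SmallNet()
model.eval()                          # weights frozen, dropout still fires
logits = model(torch.randn(1, 784))

The dropout rate then serves as a knob: training-time dropout is chosen for the best clean test accuracy, while the test-time rate trades a small amount of that accuracy for defense strength.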

Cited by 53 publications (38 citation statements). References 24 publications.

Citation statements (ordered by relevance):
“…Breaking the transferability of adversarial examples is a key challenge for the research community. Currently, defensive dropout [130] at test time is a promising defense. Adversarial example detection is also a useful area of research.…”
Section: Discussion (mentioning)
confidence: 99%
“…Functionality-preserving adversarial examples are an interesting avenue for further research. Adversarial Training [69] Distillation as defense [118] Feedback Alignment [119] Assessing Threat [120] Statistical Test [121] Detector SubNetwork [122] Artifacts [123] MagNet [124] Feature Squeezing [125] GAT [96] EAT [126] Defense-GAN [97] Assessing Threat [127] Stochastic Activation Pruning [128] DeepTest [129] DeepRoad [130] Defensive Dropout [130] Def-IDS [99] Multi-Classifier System [131] Weight Map Layers [132] Sequence Squeezing [105] Feature Removal [133] Adversarial Training [134] Adversarial Training [135] Game Theory [136] Hardening [137] Variational Auto-encoder [138] MANDA…”
Section: Adversarial Example Constraints (mentioning)
confidence: 99%
“…Implemented under the L0, L2, and L∞ norms, the CW Attack [7] is the strongest attack in the literature [40], [44], [61]. It is based on L-BFGS [54] and can be targeted or untargeted.…”
Section: Carlini-Wagner (CW) Attack (mentioning)
confidence: 99%
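For context, the CW attack referenced above is a logit-margin optimization; a standard statement of its targeted L2 objective is sketched below (Z denotes the network logits, t the target label, c a trade-off constant, and κ a confidence margin). This restatement comes from general knowledge of the attack, not from this page.

\min_{\delta}\ \|\delta\|_2^2 + c \cdot f(x+\delta),
\qquad
f(x') = \max\Big(\max_{i \neq t} Z(x')_i - Z(x')_t,\ -\kappa\Big)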
“…al. 2018; Madry et al. 2017; Wang et al. 2019; 2018a; Xu et al. 2019). This work mainly investigates the first category to build the groundwork towards developing potential defensive measures in reliable ML.…”
Section: Introduction (mentioning)
confidence: 99%