2019
DOI: 10.1609/aaai.v33i01.3301541
Resisting Adversarial Attacks Using Gaussian Mixture Variational Autoencoders

Abstract: Susceptibility of deep neural networks to adversarial attacks poses a major theoretical and practical challenge. All efforts to harden classifiers against such attacks have seen limited success so far. Two distinct categories of samples against which deep neural networks are vulnerable, "adversarial samples" and "fooling samples", have until now been tackled separately because handling them jointly is difficult. In this work, we show how one can defend against both under a unified framework. Our …
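The truncated abstract points at the paper's core idea: give the latent space of a variational autoencoder a mixture-of-Gaussians prior with one component per class, so that classification and rejection of off-manifold inputs share a single model. Below is a minimal PyTorch sketch of that idea; the architecture, threshold mechanism, and names such as `GMVAE` and `mu_prior` are our assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code) of a VAE whose latent prior is
# a mixture of Gaussians, one component per class. At test time the label is
# the nearest component mean in latent space, and inputs that reconstruct
# poorly (off-manifold "adversarial" or "fooling" samples) are rejected.
import torch
import torch.nn as nn

class GMVAE(nn.Module):
    def __init__(self, in_dim=784, z_dim=8, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 400), nn.ReLU())
        self.fc_mu = nn.Linear(400, z_dim)
        self.fc_logvar = nn.Linear(400, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 400), nn.ReLU(),
                                 nn.Linear(400, in_dim), nn.Sigmoid())
        # one Gaussian prior mean per class in latent space (assumed learnable)
        self.mu_prior = nn.Parameter(torch.randn(n_classes, z_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def classify_or_reject(model, x, recon_threshold):
    """Label = nearest mixture component; reject if reconstruction is poor."""
    x_hat, mu, _ = model(x)
    recon_err = ((x - x_hat) ** 2).sum(dim=1)      # per-sample error
    dists = torch.cdist(mu, model.mu_prior)        # (batch, n_classes)
    labels = dists.argmin(dim=1)
    labels[recon_err > recon_threshold] = -1       # -1 = rejected input
    return labels
```

In this sketch the same reconstruction-error test covers both sample categories: adversarial perturbations of real images and nonsense "fooling" images both tend to fall off the learned manifold and reconstruct badly.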

Cited by 57 publications (49 citation statements). References 0 publications.
“…In addition, the proposed method does not need any adversarial training, and the inference is also time-efficient. Therefore, we evaluate our approach on the full ImageNet validation set which is closer to the real-world setting than other related works on MNIST and CIFAR-10 [15,16,25,26]. MNIST and CIFAR-10 are of very low resolution (28×28, 32×32) and far from the real-world setting.…”
Section: Evaluation Results on ImageNet (mentioning)
confidence: 99%
“…To make a fair comparison and prove the effectiveness of our method, we choose the vanilla VAE and several improved versions that have been reported with the same network architecture. The baseline models include constant-variance VAE (CV-VAE) [2,8], Wasserstein VAE (WAE) [28], 2-stage VAE (2s-VAE) [7], and Regularized AutoEncoders (RAE) [9].…”
Section: Quantitative and Qualitative Results for Image Generation (mentioning)
confidence: 99%
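For readers unfamiliar with the baselines named in this statement, the sketch below contrasts a vanilla VAE objective with the constant-variance VAE (CV-VAE) variant, whose decoder fixes its output variance so the reconstruction term becomes a scaled MSE. The function names and the Bernoulli/Gaussian decoder choices are our assumptions, not taken from the cited papers.

```python
# Minimal sketch of the loss difference between a vanilla VAE and CV-VAE.
import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, logvar):
    # Bernoulli decoder: binary cross-entropy reconstruction + KL(q || N(0, I))
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

def cv_vae_loss(x, x_hat, mu, logvar, sigma=0.1):
    # Gaussian decoder with fixed variance sigma^2: the reconstruction term
    # reduces to MSE scaled by 1/(2*sigma^2); the KL term is unchanged.
    recon = ((x - x_hat) ** 2).sum() / (2 * sigma ** 2)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```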
“…
| Defense | Defense Strategy | Defense Strength | Defense Complexity | Experimental Setup | Research Impact |
|---|---|---|---|---|---|
| Statistical Detection [67] | Guard - Adversarial Detector | * | ** | ** | ** |
| Binary Classification [62] | Guard - Adversarial Detector | * | ** | * | * |
| In-Layer Detection [117] | Guard - Adversarial Detector | * | ** | *** | ** |
| Detecting from Artifacts [52] | Guard - Adversarial Detector | * | ** | ** | ** |
| SafetyNet [111] | Guard - Adversarial Detector | * | ** | ** | * |
| Convolutional Statistics Detector [104] | Guard - Adversarial Detector | * | ** | ** | * |
| Saliency Data Detector [183] | Guard - Adversarial Detector | * | ** | * | * |
| Ensemble Detectors [1] | Guard - Adversarial Detector | * | ** | * | * |
| MagNet [116] | Guard - Adversarial Detector | * | ** | *** | ** |
| Generative Detector [102] | Guard - Adversarial Detector | * | ** | * | * |
| PixelDefend [154] | Guard - Adversarial Detector | * | ** | *** | * |
| VAE Detector [57] | Guard - Adversarial Detector | * | *** | ** | * |
| Bit-Depth [70] | Guard - Input Transformation | * | * | ** | ** |
| Basis Transformations [148] | Guard - Input Transformation | * | * | ** | * |
| Randomized Transformations [177] | Guard - Input Transformation | * | * | *** | * |
| Thermometer Encoding [20] | Guard - Input Transformation | * | * | *** | * |
| Blind Pre-Processing [136] | Guard - Input Transformation | * | * | * | * |
| Data Discretization [28] | Guard - Input Transformation | * | * | ** | * |
| Adaptive Noise [105] | Guard - Input Transformation | * | * | * | * |
| FGSM Training [65] | Design - Adversarial Training | * | * | ** | *** |
| Gradient Training [152] | Design - Adversarial Training | * | * | * | * |
| Gradient Regularization [114] | Design - Adversarial Training | * | * | * | * |
| Structured Regularization [139] | Design - Adversarial Training | * | * | ** | * |
| Robust Training [149] | Design - Adversarial Training | ** | * | ** | ** |
| Strong Adversary Training [79] | Design - Adversarial Training | * | ** | * | ** |
| Madry [115] | Design - Adversarial Training | *** | ** | *** | *** |
| Ensemble Training [165] | Design - Adversarial Training | ** | ** | ** | *** |
| Stochastic Pruning [38] | Design - Adversarial Training | ** | ** | ** | ** |
| Distillation [132] | Design - Architecture | * | ** | ** | *** |
Parseval Networks…”
Section: Defense, Defense Strategy, Defense Performance (mentioning)
confidence: 99%
“…Not-i.i.d. hypothesis. A different hypothesis assumes that adversarial examples lie off the data manifold and are sampled from a different distribution [57,102,116,154]. This hypothesis led to the proposal of adversarial detection methods (Section 7.1) and to attempts to learn this new distribution with generative models.…”
Section: Hypotheses on the Existence of Adversarial Examples (mentioning)
confidence: 99%
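The off-manifold hypothesis quoted above is what motivates reconstruction-based detectors such as MagNet [116] and the VAE detector [57]: a generative model trained on clean data should reconstruct on-manifold inputs well and off-manifold (adversarial) inputs poorly. A hedged sketch of that generic recipe follows; the `model.reconstruct` API and the percentile threshold are illustrative assumptions, not any one paper's method.

```python
# Generic reconstruction-error detection sketch (assumed model API).
import numpy as np

def fit_threshold(recon_errors_clean, percentile=99.0):
    """Pick a rejection threshold from clean validation reconstruction errors."""
    return np.percentile(recon_errors_clean, percentile)

def detect(model, x, threshold):
    """Return True for inputs flagged as likely adversarial (off-manifold)."""
    x_hat = model.reconstruct(x)  # assumed generative-model API, batch-first
    # mean squared error per sample, averaged over all non-batch dimensions
    err = np.mean((x - x_hat) ** 2, axis=tuple(range(1, x.ndim)))
    return err > threshold
```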