2018
DOI: 10.48550/arxiv.1810.01279
Preprint
Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network

Abstract: We present a new algorithm to train a robust neural network against adversarial attacks. Our algorithm is motivated by the following two ideas. First, although recent work has demonstrated that fusing randomness can improve the robustness of neural networks (Liu et al., 2017), we noticed that adding noise blindly to all the layers is not the optimal way to incorporate randomness. Instead, we model randomness under the framework of Bayesian Neural Network (BNN) to formally learn the posterior distribution of mo…
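The abstract's core recipe (place a learned Gaussian posterior over the network weights and train that posterior on adversarially perturbed inputs) can be illustrated with a short sketch. This is a minimal PyTorch-style illustration, not the authors' implementation: the class name BayesLinear, the hyperparameters, and the omission of the KL regularization term from the ELBO are all simplifications for exposition.

```python
# Minimal sketch: a layer whose weights are sampled from a learned Gaussian
# posterior (reparameterization trick), trained on PGD adversarial examples.
# Names and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior over its weights."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.b = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # sigma = softplus(rho) keeps the standard deviation positive
        w_sigma = F.softplus(self.w_rho)
        # Reparameterization: w = mu + sigma * eps, with eps ~ N(0, I)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        return F.linear(x, w, self.b)

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L_inf PGD adversarial example against the (stochastic) model."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

# One training step: fit the weight posterior on adversarial examples.
# The full method also regularizes the posterior toward a prior (KL term),
# which is omitted here for brevity.
model = nn.Sequential(nn.Flatten(), BayesLinear(28 * 28, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
x_adv = pgd_attack(model, x, y)
loss = F.cross_entropy(model(x_adv), y)
opt.zero_grad(); loss.backward(); opt.step()
```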

Cited by 33 publications (54 citation statements)
References 12 publications
“…Stochastic defenses. Many methods [7], [8], [9], [16], [17], [18], [19], [20] have been proposed to defend against adversarial attacks by introducing randomness into the classifier. However, they have been broken by a stronger adaptive proxy-gradient-based attack [11], [21], [22].…”
Section: Literature Review
Mentioning confidence: 99%
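The citation statement above notes that stochastic defenses have been broken by adaptive proxy-gradient-based attacks. A common ingredient of such adaptive attacks is averaging the loss gradient over many stochastic forward passes (an Expectation-over-Transformation style estimate), so the attack is not misled by per-call randomness. A minimal sketch follows, assuming a PyTorch model whose forward pass is stochastic; the function name and sample count are illustrative.

```python
# Sketch of an adaptive gradient estimate against a stochastic classifier:
# average the loss gradient over several stochastic forward passes.
import torch
import torch.nn.functional as F

def eot_gradient(model, x, y, n_samples=20):
    """Monte-Carlo estimate of grad_x E[loss(model(x), y)] over the model's randomness."""
    x = x.detach().requires_grad_(True)
    total = 0.0
    for _ in range(n_samples):
        total = total + F.cross_entropy(model(x), y)
    (total / n_samples).backward()
    return x.grad.detach()
```

A PGD-style attack then uses this averaged gradient in place of the single-sample gradient at each step.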
“…The Random Self-Ensemble (RSE) approach introduced random noise layers to make the network stochastic and ensembled the predictions over the randomness to achieve stable performance [16]. While RSE introduced randomness by perturbing the inputs of each layer, another approach, Adv-BNN [17], used a Bayesian neural network structure combined with adversarial training [3] to make the model's weights stochastic. Moreover, He et al. [18] proposed Parametric Noise Injection (PNI), which injects learnable noise into the layer-wise weights or inputs.…”
Section: Literature Review
Mentioning confidence: 99%
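The PNI idea quoted above (learnable noise injected into layer-wise weights or inputs) can be illustrated with a small sketch. The layer below adds Gaussian noise, scaled by a learnable parameter alpha and by the weights' own magnitude, at every training-time forward pass; the names and the exact noise model are assumptions for illustration, not the published implementation.

```python
# Illustration of layer-wise learnable noise injection (PNI-style):
# alpha is a trainable scale that controls how strongly Gaussian noise
# perturbs the weights during training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.alpha = nn.Parameter(torch.tensor(0.1))  # learnable noise scale

    def forward(self, x):
        if self.training:
            # Noise magnitude follows the weights' own spread, modulated by alpha
            noise = torch.randn_like(self.weight) * self.weight.detach().std()
            w = self.weight + self.alpha * noise
        else:
            w = self.weight
        return F.linear(x, w, self.bias)
```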
“…The work of [18,19,20,21] investigates the relationship between weight sparsity and the robustness of models against adversarial attacks. Other provable defenses utilize k-nearest neighbors (KNN) [22,23] and Bayesian deep neural networks (BNNs) [24]. Ensemble methods [25,26,27] were a major influence on this work.…”
Mentioning confidence: 99%