2021
DOI: 10.48550/arxiv.2103.13813
Preprint
RA-BNN: Constructing Robust & Accurate Binary Neural Network to Simultaneously Defend Adversarial Bit-Flip Attack and Improve Accuracy

Abstract: Recently developed adversarial weight attack, a.k.a. bit-flip attack (BFA), has shown enormous success in compromising Deep Neural Network (DNN) performance with an extremely small amount of model parameter perturbation. To defend against this threat, we propose RA-BNN that adopts a complete binary (i.e., for both weights and activation) neural network (BNN) to significantly improve DNN model robustness (defined as the number of bit-flips required to degrade the accuracy to as low as a random guess). However, …
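The abstract's core intuition is that a weight stored in a single bit can only move between two values, while a bit-flip in a full-precision weight can change its magnitude drastically. A minimal sketch of that contrast (the helper names are hypothetical, not from the paper's code):

```python
import struct

def flip_bit_float32(x: float, bit: int) -> float:
    """Flip one bit (0 = LSB, 31 = sign) in the IEEE-754 binary32 encoding of x."""
    (i,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", i ^ (1 << bit)))
    return y

def flip_binary_weight(w: int) -> int:
    """A binarized weight lives in {-1, +1}; a bit-flip can only negate it."""
    assert w in (-1, +1)
    return -w

# Flipping the most significant exponent bit of a float weight explodes it:
print(flip_bit_float32(0.5, 30))   # 0.5 -> 2**127, about 1.7e+38

# The same attack on a binary weight perturbs it by at most 2:
print(flip_binary_weight(+1))      # -1
```

This bounded per-flip damage is why the number of bit-flips needed to drive a fully binarized model down to random-guess accuracy is so much larger.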

Cited by 2 publications (3 citation statements)
References 35 publications (63 reference statements)
“…CIFAR [13] is a dataset having 100 classes of colored images, and the CIFAR10 dataset is reduced to 10 classes. Because images are of size 32 × 32 × 3 (32 pixels width, 32 pixels height, 3 color channels), the network input is a 3D tensor of shape (32, 32, 3). The neural network consists of 5 layers: a convolutional layer having 32 neurons with activation function ReLU, followed by a max pooling of size 2×2, a convolutional layer having 64 neurons with activation function ReLU, a flatten layer, and finally a dense layer of 10 neurons with activation function softmax.…”
Section: CIFAR Neural Network
confidence: 99%
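The layer-by-layer tensor shapes of the 5-layer network described in that citation can be traced with a small sketch, assuming 'same' padding for the convolutions and stride-2 pooling (the citing paper does not state these, so they are assumptions here, as are the helper names):

```python
# Hypothetical shape-tracing helpers, not from the cited paper's code.

def conv2d_same(shape, filters):
    h, w, _ = shape
    return (h, w, filters)       # 'same' padding keeps the spatial size

def maxpool2x2(shape):
    h, w, c = shape
    return (h // 2, w // 2, c)   # stride-2 pooling halves height and width

def flatten(shape):
    h, w, c = shape
    return (h * w * c,)

shape = (32, 32, 3)              # CIFAR input: 32x32 RGB image
shape = conv2d_same(shape, 32)   # -> (32, 32, 32), ReLU
shape = maxpool2x2(shape)        # -> (16, 16, 32)
shape = conv2d_same(shape, 64)   # -> (16, 16, 64), ReLU
shape = flatten(shape)           # -> (16384,)
dense_out = (10,)                # dense + softmax over 10 CIFAR-10 classes
print(shape, dense_out)
```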
“…While much effort has been devoted to the safety and robustness of deep learning code (see for instance [13,34,28,27,32]) a few studies have been carried out on the effects of rounding error propagation on neural networks. Verifiers such as MIPVerify [36] are designed to check properties of neural networks and measure their robustness.…”
Section: Introduction
confidence: 99%
“…In response to bit-flip attacks, prior work suggests adding specific constraints on DNN weights during training such as binarization [6], clustering [7], or block reconstruction [8]. Adding such constraints increases the number of bit-flips required to deplete the inference accuracy; however, it does not entirely mitigate the threat.…”
Section: Introduction
confidence: 99%