2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020
DOI: 10.1109/cvpr42600.2020.01410
Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack

Cited by 48 publications (58 citation statements)
References 7 publications
“…We impose several reasonable constraints on the adversary. The adversary cannot destroy the integrity of the model, i.e., the attacker cannot attack the training process, such as via poisoning attacks [70], [77] or backdoor attacks [44], [74], nor directly modify the parameters inside the model, e.g., via bit-flip attacks [31], [73], [80] or fault injection attacks [4], [47]. Additionally, we assume that the data preprocessing stage cannot be tampered with by an adversary.…”
Section: A. Threat Model
confidence: 99%
“…Low-precision fixed-point data types have been common practice in existing DL accelerator design, although the motivation is more about increasing throughput and energy efficiency for local inference in edge applications. To defend against bit-flip attacks targeting quantized model weights, binarization-aware and piece-wise clustering methods [182] are used to train the DL classifier. Binarization can mimic bit-flip noise on the weights, while piece-wise clustering can add a fixed single bit-width constraint during the training process.…”
Section: Defenses and Countermeasures
confidence: 99%
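The piece-wise clustering idea quoted above can be sketched as a training-time regularization penalty that pulls positive and negative weights toward their own cluster centers, yielding a bimodal weight distribution that is harder to damage with a handful of bit flips. This is a minimal illustrative sketch, not the paper's implementation; the function name `piecewise_clustering_penalty` and the weighting `lam` are assumptions.

```python
import numpy as np

def piecewise_clustering_penalty(w, lam=1e-3):
    """Illustrative piece-wise clustering regularizer (hypothetical API).

    Penalizes the spread of the positive weights around their mean and of
    the negative weights around theirs, encouraging a two-cluster
    (bimodal) weight distribution. Added to the task loss during training.
    """
    pos, neg = w[w >= 0], w[w < 0]
    penalty = 0.0
    if pos.size:
        penalty += np.sum((pos - pos.mean()) ** 2)
    if neg.size:
        penalty += np.sum((neg - neg.mean()) ** 2)
    return lam * penalty

# Weights already collapsed onto two clusters incur no penalty;
# widely spread weights are penalized.
tight = piecewise_clustering_penalty(np.array([1.0, 1.0, -1.0, -1.0]))
spread = piecewise_clustering_penalty(np.array([0.1, 2.0, -0.1, -2.0]))
```

In practice such a term would be summed per layer and added to the cross-entropy loss with a tuned coefficient.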
“…Binarization can mimic bit-flip noise on the weights, while piece-wise clustering can add a fixed single bit-width constraint during the training process. These two training methods improve the robustness of ResNet-20 and VGG-11 trained on the CIFAR-10 dataset by 19.3× and 480.1×, respectively, compared to their normally trained counterparts [182]. Weight reconstruction [183] mitigates the effect of bit flips by averaging the errors over a grain of weights, followed by quantization and clipping of the weight values in the grain.…”
Section: Defenses and Countermeasures
confidence: 99%
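The grain-wise weight reconstruction quoted above can be sketched as follows: a large deviation introduced by one flipped weight is averaged over the whole grain, then the result is clipped to the valid weight range. This is a hedged sketch of the mechanism, not the cited implementation; `reconstruct_grain` and the stored reference mean `ref_mean` are assumed names, and the clean grain mean would in practice come from precomputed metadata.

```python
import numpy as np

def reconstruct_grain(w_grain, w_min=-1.0, w_max=1.0, ref_mean=0.0):
    """Illustrative grain-wise reconstruction (hypothetical API).

    Any shift of the grain's mean away from its stored clean value
    (ref_mean), e.g. caused by a single flipped high-order bit, is
    spread evenly across all weights in the grain, then the weights
    are clipped to the valid range.
    """
    err = w_grain.mean() - ref_mean   # deviation caused by the flip
    w = w_grain - err                 # average the error over the grain
    return np.clip(w, w_min, w_max)

# One weight corrupted from ~0.1 to 4.0 by a bit flip: after
# reconstruction the error is diluted and bounded by the clip range.
grain = np.array([4.0, -0.1, 0.2, -0.2])
recovered = reconstruct_grain(grain)
```

The key property is that a single catastrophic flip can no longer push any one weight far outside the expected range; its impact is bounded by the grain size and the clipping interval.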
“…Several prior defenses against inference-time fault injection attacks suggest adding specific constraints to the model during training. The authors of [7] show that adding a piece-wise clustering constraint to the training objective or performing binarized training can improve resiliency. Follow-up work [8] proposes locally reconstructing DNN weights during inference to minimize or defuse the effect of the bitwise error caused by the bit flips.…”
Section: B. Existing Defenses
confidence: 99%
“…In response to bit-flip attacks, prior work suggests adding specific constraints on DNN weights during training, such as binarization [6], clustering [7], or block reconstruction [8]. Adding such constraints increases the number of bit flips required to deplete the inference accuracy; however, they do not entirely mitigate the threat.…”
Section: Introduction
confidence: 99%
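The excerpts above all target the same underlying threat: in a quantized model a single flipped bit can swing a weight across almost its entire representable range. A minimal demonstration on 8-bit two's-complement weights (the usual fixed-point format mentioned above); `flip_bit` is an illustrative helper, not from any cited work:

```python
def flip_bit(w_int8, bit):
    """Flip one bit of an 8-bit two's-complement weight (illustrative).

    Interprets w_int8 as a signed 8-bit value, XORs the chosen bit,
    and converts back to a signed Python int.
    """
    v = (w_int8 & 0xFF) ^ (1 << bit)
    return v - 256 if v >= 128 else v

# Flipping the sign/most-significant bit of a small positive weight
# swings it across nearly the whole int8 range:
flip_bit(3, 7)   # → -125

# The flip is its own inverse:
flip_bit(flip_bit(3, 7), 7)   # → 3
```

This asymmetry, where high-order bit flips cause huge weight perturbations, is exactly what the clustering, binarization, and reconstruction defenses quoted above aim to blunt.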