2020
DOI: 10.1609/aaai.v34i04.6105
Towards Certificated Model Robustness Against Weight Perturbations

Abstract: This work studies the sensitivity of neural networks to weight perturbations, corresponding to a newly developed threat model that perturbs the neural network parameters. We propose an efficient approach to compute a certified robustness bound of weight perturbations, within which neural networks will not make erroneous outputs as desired by the adversary. In addition, we identify a useful connection between our developed certification method and the problem of weight quantization, a popular model compression technique. …
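
The abstract's central object, a certified bound on weight perturbations, can be illustrated with simple interval arithmetic: if every weight of a small ReLU network may move by at most eps (an L∞ ball around the trained weights), propagating intervals through the layers lower-bounds the margin of the predicted class. The NumPy sketch below is only a minimal illustration of that idea, not the paper's actual certification algorithm; all function names and the toy network are ours:

    import numpy as np

    def interval_linear(l, u, W, b, eps):
        """Bounds on y = (W + D) x + b over all |D_ij| <= eps and x in [l, u]."""
        c, r = (l + u) / 2.0, (u - l) / 2.0    # interval center and radius
        yc = W @ c + b
        # |y - yc| <= |W| r  (input interval)  +  eps * ||x||_1  (weight perturbation)
        yr = np.abs(W) @ r + eps * np.sum(np.abs(c) + r)
        return yc - yr, yc + yr

    def certified_margin(x, W1, b1, W2, b2, eps, label):
        """Lower bound on (true logit - best other logit); > 0 certifies x."""
        l, u = interval_linear(x, x, W1, b1, eps)
        l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)   # ReLU is monotone
        l, u = interval_linear(l, u, W2, b2, eps)
        return l[label] - np.max(np.delete(u, label))

    # Tiny usage example with random weights (hypothetical toy network).
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
    x = rng.normal(size=4)
    print(certified_margin(x, W1, b1, W2, b2, eps=1e-3, label=0))

A positive returned margin certifies that no weight perturbation within the eps-ball changes the prediction; the largest eps for which this holds is a (conservative) certified robustness bound.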

Cited by 21 publications (14 citation statements) · References 18 publications
“…Multiple structured pruning approaches exist based on their pruning dimensions, including filter pruning (which prunes whole filters), channel pruning (which prunes channels), column pruning (which prunes the same location in each filter of each layer), and connectivity and pattern pruning (which prunes both channels and certain locations in each kernel simultaneously) [Ma et al., 2019]. Despite the differences among these structured pruning methods, we support them in a unified framework based on the Alternating Direction Method of Multipliers (ADMM) [Boyd et al., 2011; Zhao et al., 2019b; Weng et al., 2020]. In general, the pruning optimization problem is formulated as,…”
Section: Structured Model Pruning
confidence: 86%
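
The excerpt is cut off at "formulated as,". For orientation, a common ADMM-based pruning setup (a hedged reconstruction of the general recipe, not necessarily the exact objective in the citing paper) splits the constrained problem as

    \min_{W,\,Z}\; f(W) + g(Z) \quad \text{s.t.} \quad W = Z,

where f is the training loss and g is the indicator of the structured-sparsity set S (zero if Z lies in S, +infinity otherwise). ADMM then alternates

    W^{k+1} = \arg\min_W\; f(W) + \tfrac{\rho}{2}\,\lVert W - Z^k + U^k \rVert_F^2,
    Z^{k+1} = \Pi_S\!\big(W^{k+1} + U^k\big),
    U^{k+1} = U^k + W^{k+1} - Z^{k+1},

where \Pi_S is the Euclidean projection onto S, e.g. keeping the filters or channels with the largest norms and zeroing the rest. The W-step is ordinary SGD training with a quadratic penalty, which is why one framework can host all the structured pruning variants listed above.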
“…The super-resolution model mainly utilizes residual blocks with wider activation and linear low-rank convolution [Yu et al., 2018], trained on the DIV2K dataset [Timofte et al., 2017]. With structured pruning and compiler optimization, we implement the models on a Samsung Galaxy S10 mobile phone.…”
Section: Experiments and Demonstrations
confidence: 99%
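
"Residual blocks with wider activation and linear low-rank convolution" refers to the WDSR-style design of Yu et al., 2018: widen channels before the ReLU, then reduce through a linear (activation-free) low-rank path. A rough PyTorch sketch follows; the expansion and rank factors are illustrative choices, not the published configuration:

    import torch.nn as nn

    class WideLowRankBlock(nn.Module):
        """Sketch of a WDSR-B-style residual block (Yu et al., 2018)."""
        def __init__(self, channels=32, expansion=6, rank_ratio=0.8):
            super().__init__()
            wide, slim = channels * expansion, int(channels * rank_ratio)
            self.body = nn.Sequential(
                nn.Conv2d(channels, wide, 1),      # widen before the activation
                nn.ReLU(inplace=True),
                nn.Conv2d(wide, slim, 1),          # linear low-rank reduction
                nn.Conv2d(slim, channels, 3, padding=1),
            )

        def forward(self, x):
            return x + self.body(x)                # identity skip connection

Because only the 1x1 expansion feeds the ReLU, the block gets a wide nonlinearity at modest parameter cost, which is also what makes it a friendly target for the structured pruning described above.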
“…ADVBET is similar in spirit to training on adversarial examples, which has received considerable attention recently [20], [21], [45], [46]. Weight Robustness: Only a few works consider weight robustness: [47] certifies the robustness of weights with respect to L∞ perturbations and [48] studies Gaussian noise on weights. [11], [13] consider identifying and (adversarially) flipping a few vulnerable bits in quantized weights.…”
Section: Low-Voltage Random Bit Errors in DNN Accelerators
confidence: 99%
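
The bit-flipping threat model mentioned here is easy to simulate. Below is a minimal NumPy sketch that quantizes weights to m bits, flips each stored bit independently with probability p, and dequantizes; symmetric linear quantization is an assumption for illustration, and this shows the random error model only, not any published implementation:

    import numpy as np

    def inject_bit_errors(w, m=8, p=0.01, rng=None):
        """Quantize w to m-bit signed fixed point, flip each stored bit
        with probability p, dequantize. Quantizer choice is an assumption."""
        rng = rng or np.random.default_rng()
        scale = np.abs(w).max() / (2 ** (m - 1) - 1)
        q = np.clip(np.round(w / scale), -2 ** (m - 1), 2 ** (m - 1) - 1)
        q = q.astype(np.int64) & ((1 << m) - 1)        # two's-complement bits
        flips = rng.random(w.shape + (m,)) < p          # independent bit flips
        q ^= (flips * (1 << np.arange(m))).sum(axis=-1)
        q = np.where(q >= 2 ** (m - 1), q - 2 ** m, q)  # back to signed ints
        return q * scale

    w = np.random.default_rng(1).normal(scale=0.05, size=(16, 16))
    print(np.abs(inject_bit_errors(w, m=8, p=0.01) - w).max())

Training with such randomly injected errors (the RANDBET rows in the table below) exposes the network to the same corruption it will see on a low-voltage accelerator.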
“…This is particularly pronounced for low-precision, e.g., m = 4 bits.…”

The excerpt then flattens a results table. The recoverable rows are below (test error in %, mean ±std; the three error columns correspond to increasing random bit error rates whose exact values did not survive extraction; one leading baseline row is too garbled to reconstruct, and the precision label of the first block is plausibly 8 bit, by contrast with the 4 bit block; the numeric parameter after each method is presumably its weight clipping bound w_max):

m     Method      w_max  train p  Clean  Err @ rate 1  Err @ rate 2  Err @ rate 3
8bit  CLIPPING    0.1    --       4.82   6.95 ±0.24    8.93 ±0.46    12.22 ±1.29
8bit  PLCLIPPING  0.25   --       4.96   6.21 ±0.16    7.04 ±0.28    8.14 ±0.49
8bit  RANDBET     0.1    p=0.1    4.72   6.74 ±0.29    8.53 ±0.58    11.40 ±1.27
8bit  RANDBET     0.1    p=1      4.90   6.36 ±0.17    7.41 ±0.29    8.65 ±0.37
8bit  PLRANDBET   0.25   p=0.1    4.49   5.80 ±0.16    6.65 ±0.22    7.59 ±0.34
8bit  PLRANDBET   0.25   p=1      4.62   5.62 ±0.13    6.36 ±0.20    7.02 ±0.27
4bit  CLIPPING    0.1    --       5.29   7.71 ±0.36    10.62 ±1.08   15.79 ±2.54
4bit  PLCLIPPING  0.25   --       4.63   6.15 ±0.16    7.34 ±0.33    8.70 ±0.62
4bit  RANDBET     0.1    p=1      5.39   7.04 ±0.21    8.34 ±0.42    9.77 ±0.81
4bit  PLRANDBET   0.25   p=1      4.83   5.95 ±0.12    6.65 ±0.19    7.48 ±0.32
confidence: 99%
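
CLIPPING and PLCLIPPING in the table constrain weight magnitudes so that any single bit flip moves a weight by less in absolute terms. A minimal PyTorch sketch of that idea follows; the per-layer variant is our guess at what the "PL" prefix denotes, and the published methods may choose their budgets differently:

    import torch

    def clip_weights(model, w_max=0.1, per_layer=None):
        """Project all weights into [-w_max, w_max] after an optimizer step.
        `per_layer` (optional dict: parameter name -> bound) mimics a
        per-layer variant -- an assumption, not the published PLCLIPPING."""
        with torch.no_grad():
            for name, p in model.named_parameters():
                bound = (per_layer or {}).get(name, w_max)
                p.clamp_(-bound, bound)

    # Usage: call clip_weights(model) immediately after optimizer.step().

Combining clipping with random bit error injection during training corresponds to the RANDBET/PLRANDBET rows, which give the lowest errors in the table above.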
“…Recently, maliciously altered weights have been used to introduce a specific backdoor [26, 35]. Few works exist that defend against malicious changes to the weights in general, rather than only those related to backdoor introduction [76, 87].…”
Section: Adversarial Machine Learning
confidence: 99%