2020
DOI: 10.1007/978-3-030-45237-7_5
How Many Bits Does it Take to Quantize Your Neural Network?

Abstract: Quantization converts neural networks into low-bit fixed-point computations which can be carried out by efficient integer-only hardware, and is standard practice for the deployment of neural networks on real-time embedded devices. However, like their real-numbered counterparts, quantized networks are not immune to malicious misclassification caused by adversarial attacks. We investigate how quantization affects a network’s robustness to adversarial attacks, which is a formal verification question. We show that …
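
To make the abstract's "low-bit fixed-point computation" concrete, here is a minimal sketch of uniform quantization (our illustration, not the paper's scheme; the 8-bit width, the scale, and all names are example choices):

    # Illustrative uniform quantization to a signed 8-bit fixed-point grid.
    # Bit-width, scale, and all names are example choices, not the paper's.
    def quantize(x, scale, num_bits=8):
        qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
        q = round(x / scale)               # snap the real value to the integer grid
        return max(qmin, min(qmax, q))     # saturate to the representable range

    def dequantize(q, scale):
        return q * scale                   # back to an approximate real value

    scale = 0.05
    w = 0.4242
    q = quantize(w, scale)                 # 8, i.e. represents 0.40
    print(q, dequantize(q, scale))         # rounding error is at most scale / 2

All arithmetic on the quantized side stays in integers, which is what makes integer-only hardware applicable.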

Cited by 24 publications (38 citation statements)
References 18 publications
“…Most of these lines of work have focused on non-quantized DNNs. Verification of quantized DNNs is PSPACE-hard [24], and requires different tools than the ones used for their non-quantized counterparts [18]. Our technique extends an existing line of SMT-based verifiers to also support the sign activation functions needed for verifying BNNs; and these new activations can be combined with various other layers.…”
Section: Related Work
confidence: 99%
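
The sign activation referred to above maps any input to ±1 and is naturally expressed in SMT as a conditional. A minimal z3py sketch of such an encoding (our illustration, not the cited tool's actual implementation; sign(0) = +1 is a common BNN convention, and the variable names are hypothetical):

    # Encoding y = sign(x), with the common BNN convention sign(0) = +1, in z3.
    from z3 import If, Real, Solver, sat

    x, y = Real('x'), Real('y')
    s = Solver()
    s.add(y == If(x >= 0, 1, -1))   # piecewise definition of the sign activation
    s.add(x == -0.3)                # example input
    assert s.check() == sat
    print(s.model()[y])             # -1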
“…However, most of these approaches have not focused on binarized neural networks, although they are just as vulnerable to safety and security concerns as other DNNs. Recent work has shown that verifying quantized neural networks is PSPACE-hard [24], and that it requires different methods than the ones used for verifying non-quantized DNNs [18]. The few existing approaches that do handle binarized networks focus on the strictly binarized case, i.e., on networks where all components are binary, and verify them using a SAT solver encoding [29,43].…”
Section: Introduction
confidence: 99%
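
For the strictly binarized case, a neuron with ±1 inputs and weights outputs +1 exactly when enough input signs agree with their weight signs, which reduces to a Boolean cardinality constraint. A sketch of that reduction using z3's pseudo-Boolean atoms (an illustration of the general idea, not the specific encoding of [29,43]; the weights and names are made up):

    # A strictly binarized neuron: inputs and weights are in {-1, +1}
    # (Bool True stands for +1). The neuron outputs +1 iff at least
    # ceil(n/2) of the products w_i * x_i are +1, i.e. iff at least
    # 2 of the 3 literals (x_i if w_i = +1, else Not(x_i)) hold.
    from z3 import Bools, Not, PbGe, Solver, sat

    weights = [1, -1, 1]                         # example weight signs
    xs = Bools('x0 x1 x2')
    lits = [x if w > 0 else Not(x) for x, w in zip(xs, weights)]
    fires = PbGe([(lit, 1) for lit in lits], 2)  # cardinality: >= 2 agreements

    s = Solver()
    s.add(fires)                                 # can the neuron output +1?
    assert s.check() == sat
    print(s.model())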
“…Existing techniques for quantized DNNs are mostly based on constraint solving, in particular, SAT/SMT solving [12,33,45,46]. Following this line, verification of BNNs with ternary weights [28,48] and quantized DNNs with multiple bits [7,22,24] were also studied. Recently, the SMT-based framework Marabou for real-numbered DNNs [31] has also been extended to support BNNs [1].…”
Section: Related Work
confidence: 99%
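
Multi-bit quantized DNNs are typically encoded in the SMT bit-vector theory, which models fixed-width integer arithmetic exactly. A minimal z3py sketch of one quantized neuron (illustrative only; the 8-bit inputs, 16-bit accumulator, weights, and names are our assumptions, not taken from [7,22,24]):

    # One quantized neuron over 8-bit signed values, accumulated in 16 bits
    # so the sum cannot overflow (the widths here are illustrative choices).
    from z3 import BitVec, BitVecVal, SignExt, Solver

    xs = [BitVec(f'x{i}', 8) for i in range(3)]
    ws = [BitVecVal(w, 8) for w in (2, -1, 3)]           # example weights
    acc = sum(SignExt(8, x) * SignExt(8, w) for x, w in zip(xs, ws))

    s = Solver()
    s.add([x >= -4 for x in xs] + [x <= 4 for x in xs])  # small input box
    s.add(acc < 0)                                       # can the sum go negative?
    print(s.check())                                     # sat, with a witness model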
“…Various formal techniques and heuristics have been proposed to analyze DNNs and interpret their behaviors, most of which focus on real-numbered DNNs only. Verification of quantized DNNs has not been thoroughly explored so far, although recent results have highlighted its importance: it was shown that a quantized DNN does not necessarily preserve the properties satisfied by the real-numbered DNN before quantization [14,22]. Indeed, the fixed-point number semantics effectively yields a discrete state space for the verification of quantized DNNs whereas real-numbered DNNs feature a continuous state space.…”
Section: Introduction
confidence: 99%
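
The non-preservation point admits a short numeric illustration (ours, not an example from [14,22]): rounding weights onto a coarse fixed-point grid can flip the sign of a score that sits near the decision boundary.

    # Illustrative only: rounding weights to a coarse fixed-point grid
    # flips the sign of a score that sits near the decision boundary.
    scale = 0.25                                         # grid step (example)
    w, x = [0.6, -0.7], [1.0, 0.8]
    real_score = sum(wi * xi for wi, xi in zip(w, x))    # ~0.04  -> class +
    qw = [round(wi / scale) * scale for wi in w]         # [0.5, -0.75]
    quant_score = sum(wi * xi for wi, xi in zip(qw, x))  # -0.10  -> class -
    print(real_score > 0, quant_score > 0)               # True False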