An SMT Theory of Fixed-Point Arithmetic
2020 · DOI: 10.1007/978-3-030-51074-9_2

Cited by 13 publications (11 citation statements) · References 36 publications
“…Columns 3, 4, and 5 (resp. 6, 7, and 8) show the number of adversarial examples, the execution time, and the proportion of adversarial examples in the input region. Column 9 shows the error rate (RealNum − EstimatedNum) / EstimatedNum, where RealNum is from our result and EstimatedNum is from NPAQ.…”
Section: Robustness Analysis (mentioning)
confidence: 99%
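
The error-rate formula quoted above is a plain relative error. A minimal sketch (function and variable names are illustrative, not taken from NPAQ or the citing paper):

```python
# Relative error between an exact count and an estimate, per the quoted
# formula (RealNum - EstimatedNum) / EstimatedNum. Names are illustrative.
def error_rate(real_num: int, estimated_num: int) -> float:
    return (real_num - estimated_num) / estimated_num

# e.g. 1200 adversarial examples counted exactly vs. 1000 estimated:
print(error_rate(1200, 1000))  # 0.2, i.e. the estimate is 20% low
```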
“…Existing techniques for quantized DNNs are mostly based on constraint solving, in particular SAT/SMT solving [12,33,45,46]. Following this line, verification of BNNs with ternary weights [28,48] and of quantized DNNs with multiple bits [7,22,24] was also studied. Recently, the SMT-based framework Marabou for real-numbered DNNs [31] has also been extended to support BNNs [1].…”
Section: Related Work (mentioning)
confidence: 99%
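
To make the flavor of such SAT/SMT encodings concrete, here is a minimal z3 sketch of one 8-bit quantized neuron with a ReLU, posed as a satisfiability query. The widths, weights, and input region are invented for illustration; this does not reproduce the exact encoding of any cited paper.

```python
from z3 import BitVec, BitVecVal, SignExt, If, Solver, sat

# Toy 8-bit quantized neuron y = ReLU(w0*x0 + w1*x1 + b), encoded with z3
# bit-vectors. Products are accumulated in 16 bits to avoid overflow, as
# quantized inference implementations typically do.
ACC = 16

def ext(v):
    return SignExt(ACC - v.size(), v)  # sign-extend into the accumulator width

x0, x1 = BitVec('x0', 8), BitVec('x1', 8)                       # symbolic inputs
w0, w1, b = BitVecVal(3, 8), BitVecVal(-2, 8), BitVecVal(4, 8)  # fixed weights

acc = ext(x0) * ext(w0) + ext(x1) * ext(w1) + ext(b)
y = If(acc > 0, acc, BitVecVal(0, ACC))  # ReLU on the accumulator

s = Solver()
s.add(x0 >= 0, x0 <= 10, x1 >= 0, x1 <= 10)  # bounded input region
s.add(y == 0)                                # query: can the neuron go inactive?
if s.check() == sat:
    print(s.model())                         # e.g. x0 = 0, x1 = 2
```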
“…Moving from the idealized mathematical model of ANNs (infinite precision) to their quantized implementation (floating- or fixed-point) makes the computational problem even harder [12]. Given the interest in deploying extremely quantized ANNs for low-power applications [11], a few recent studies have proposed new fixed-point SMT background theories [2,10].…”
Section: Introduction (mentioning)
confidence: 99%
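
A fixed-point SMT theory has to pin down exactly this kind of bit-level semantics. The sketch below models one plausible choice for signed fixed-point values <wb, fb> (wb total bits, fb fractional bits) with round-to-nearest and saturation; the theories in [2,10] fix their own rounding and overflow modes, which may differ.

```python
# Minimal model of signed fixed-point arithmetic <wb, fb>: values are
# integers x representing x / 2**fb. One plausible choice of rounding
# (to nearest) and overflow handling (saturation).
def saturate(v: int, wb: int) -> int:
    lo, hi = -(1 << (wb - 1)), (1 << (wb - 1)) - 1
    return max(lo, min(hi, v))

def fxp_add(x: int, y: int, wb: int) -> int:
    return saturate(x + y, wb)      # same scale: integer add, then clamp

def fxp_mul(x: int, y: int, wb: int, fb: int) -> int:
    prod = x * y                    # the raw product carries 2*fb fraction bits
    rounded = (prod + (1 << (fb - 1))) >> fb  # drop fb bits, rounding to nearest
    return saturate(rounded, wb)

# 1.5 * 2.25 in <8,4>: 1.5 -> 24, 2.25 -> 36; 24*36 = 864 -> 54 = 3.375 * 16
print(fxp_mul(24, 36, 8, 4) / 16)   # 3.375
```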
“…As a result, specialized verification methods that take quantization into account need to be developed, due to the more complex semantics of quantized neural networks. Groundwork on such methods demonstrated that special encodings of networks in terms of Satisfiability Modulo Theories (SMT) (Barrett and Tinelli 2018) with bit-vector (Giacobbe, Henzinger, and Lechner 2020) or fixed-point (Baranowski et al. 2020) theories present a promising approach towards the verification of quantized networks. However, the size of networks that these tools can handle and the runtimes of these approaches do not match the efficiency of advanced verification methods developed for standard networks, like Reluplex (Katz et al. 2017) and Neurify (Wang et al. 2018a).…”
Section: Introduction (mentioning)
confidence: 99%
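
For contrast with the real-valued queries answered by tools like Reluplex and Neurify, the corresponding bit-vector robustness query can be handed directly to a general SMT solver. A hedged z3 sketch over a made-up two-class linear classifier (all constants invented; the values are kept small so the 8-bit subtraction cannot wrap around):

```python
from z3 import BitVec, BitVecVal, SignExt, Solver, sat

ACC = 16

def ext(v):
    return SignExt(ACC - v.size(), v)  # sign-extend into the score width

x = BitVec('x', 8)        # symbolic quantized input
x_ref = BitVecVal(20, 8)  # concrete input being certified
eps = BitVecVal(2, 8)     # perturbation bound

# Two class scores, each an affine function of x (invented constants).
score0 = ext(x) * BitVecVal(3, ACC) + BitVecVal(10, ACC)
score1 = ext(x) * BitVecVal(2, ACC) + BitVecVal(25, ACC)

s = Solver()
s.add(x - x_ref <= eps, x_ref - x <= eps)  # |x - x_ref| <= eps
s.add(score1 >= score0)                    # adversarial: the class flips
res = s.check()
print('robust' if res != sat else 'counterexample: %s' % s.model())
```

At x_ref = 20 the classifier picks class 0 (score0 = 70 vs. score1 = 65), and a flip would need x <= 15, outside the eps-ball, so this particular query is unsatisfiable and the sketch prints 'robust'.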