2018
DOI: 10.1007/978-3-319-77935-5_9

Output Range Analysis for Deep Feedforward Neural Networks

Abstract: Deep neural networks (NNs) are extensively used for machine learning tasks such as image classification, perception, and control of autonomous systems. Increasingly, these deep NNs are also being deployed in high-assurance applications. Thus, there is a pressing need for techniques to verify that neural networks satisfy user-expected properties. In this paper, we study a specific verification problem of computing a guaranteed range for the output of a deep neural network given…
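To make the output-range problem concrete, below is a minimal sketch of one standard over-approximation, interval bound propagation, through a small feedforward ReLU network. This is not the paper's own algorithm (the abstract is truncated above), and the network, weights, and function names are hypothetical.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map x -> W @ x + b.

    Splitting W by sign keeps the bounds sound: positive weights send
    lower bounds to lower bounds, negative weights swap them.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def output_range(layers, lo, hi):
    """Sound (possibly loose) output range of a feedforward ReLU network.

    `layers` is a list of (W, b) pairs; ReLU is applied after every
    layer except the last.
    """
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Hypothetical 2-2-1 network evaluated on the input box [-1, 1]^2.
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)),
          (np.array([[1.0, 1.0]]), np.zeros(1))]
print(output_range(layers, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
# -> (array([0.]), array([4.5]))
```

Interval bounds like these are cheap but can be much looser than a guaranteed range computed with exact encodings such as the MILP formulations discussed in the citing works below.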

Cited by 286 publications (231 citation statements). References 12 publications.
“…Thus, comparing against ReluVal on the verifiably-robust benchmarks allows us to evaluate the benefits of learning a verification policy from data. The results of this comparison are shown in Figure 15. As we can see from this figure, ReluVal is still only able to solve between 35% and 70% of the benchmarks that can be successfully solved by Charon.…”
Section: Impact of Learning a Verification Policy (RQ3) (mentioning)
confidence: 99%
“…Huang et al. [23] presented a verification framework, based on an SMT solver, which verifies robustness with respect to a certain set of functions that can manipulate the input. A few recent papers [8,32,53] use Mixed Integer Linear Programming (MILP) solvers to verify local robustness properties of neural networks. These methods do not use abstraction and do not scale very well, but combining them with abstraction is an interesting direction for future work.…”
Section: Related Work (mentioning)
confidence: 99%
“…Dutta et al. [35] also study the automatic estimation of the output range of deep NNs. A key concept of theirs is that sets of possible inputs are compactly represented by convex polyhedra.…”
Section: Verification and Simulation (mentioning)
confidence: 99%
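For readers unfamiliar with the representation mentioned in this excerpt, a convex polyhedron describes an input set by finitely many linear inequalities; the symbols $A$, $b$, $n$, $m$ below are generic, not taken from [35]:

$$P = \{\, x \in \mathbb{R}^n : A x \le b \,\}, \qquad A \in \mathbb{R}^{m \times n},\ b \in \mathbb{R}^m.$$

For instance, the input box $[-1,1]^2$ is the polyhedron with $A = \begin{pmatrix} I \\ -I \end{pmatrix}$ and $b = \mathbf{1}$.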
“…An exact MILP formulation of a ReLU network can be obtained by encoding each ReLU operator with a binary variable and applying the big-M method. This formulation has recently been applied to formal verification [32,33,34,35,36], to counting linear regions [37], and to compressing DNNs [38]. Common to these works is the consideration of a single ReLU network subject to input bounds.…”
Section: Introduction (mentioning)
confidence: 99%
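As a reference for the encoding named in this excerpt, a single ReLU $y = \max(0, x)$ with pre-activation bounds $L \le x \le U$ (assuming $L < 0 < U$; the bounds play the role of the big-M constants) admits the standard exact MILP formulation:

$$y \ge 0, \qquad y \ge x, \qquad y \le x - L(1 - \delta), \qquad y \le U\,\delta, \qquad \delta \in \{0, 1\}.$$

Setting $\delta = 1$ forces $y = x$ (the active phase, feasible only when $x \ge 0$), while $\delta = 0$ forces $y = 0$ (feasible only when $x \le 0$). Tighter bounds $L$ and $U$ yield a stronger LP relaxation, which is why the cited works assume a network subject to input bounds.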