2021
DOI: 10.1007/978-3-030-81685-8_13
PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier

Abstract: Neural Networks (NNs) have increasingly apparent safety implications commensurate with their proliferation in real-world applications: both unanticipated as well as adversarial misclassifications can result in fatal outcomes. As a consequence, techniques of formal verification have been recognized as crucial to the design and deployment of safe NNs. In this paper, we introduce a new approach to formally verify the most commonly considered safety specifications for ReLU NNs – i.e. polytopic specifications on th…
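As a rough, self-contained illustration of the kind of query such verifiers answer (this is not PEREGRiNN's algorithm; the network weights, input box, and output threshold below are invented for the example), the following Python sketch propagates a box input set through a tiny two-layer ReLU network with plain interval arithmetic and checks a half-space output specification:

```python
import numpy as np

# Hypothetical toy network (2 -> 3 -> 1); weights are made up for illustration.
W1 = np.array([[ 1.0, -0.5],
               [ 0.5,  1.0],
               [-1.0,  0.5]])
b1 = np.array([0.1, -0.2, 0.0])
W2 = np.array([[0.5, -1.0, 0.25]])
b2 = np.array([0.0])

def interval_bounds(lo, hi, W, b):
    """Propagate an axis-aligned box through an affine layer using standard
    interval arithmetic: split W into its positive and negative parts."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

# Input specification: the box [-1, 1]^2 (a simple polytope).
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

# Layer 1: affine, then ReLU (clamp both bounds at zero).
l1, u1 = interval_bounds(lo, hi, W1, b1)
l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)

# Layer 2: affine output.
l2, u2 = interval_bounds(l1, u1, W2, b2)

# Output specification (half-space): y <= 2.0 for every input in the box.
# If the sound upper bound already satisfies it, the property is verified;
# otherwise this coarse relaxation is inconclusive and a tighter method is needed.
print("output bounds:", l2, u2)
print("verified y <= 2.0:", bool(u2[0] <= 2.0))
```

Bounds obtained this way are sound but often loose; convex-relaxation verifiers such as PEREGRiNN aim to tighten such relaxations while staying cheaper than exact MILP or SMT encodings.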

Cited by 18 publications (17 citation statements). References 19 publications (28 reference statements).
“…Exp. 3) Comparison with general NN verifiers: nnenum [5], PEREGRiNN [23] and Alpha-Beta-Crown [33]. All experiments were run within a VMWare Workstation Pro virtual machine (VM) running on a Linux host with 48 hyper-threaded cores and 256 GB of memory.…”
Section: Methods (mentioning)
confidence: 99%
“…The literature on more general NN verifiers is far richer. These NN verifiers can generally be grouped into four categories: (i) SMT-based methods, which encode the problem into a Satisfiability Modulo Theories problem [12,21,22]; (ii) MILP-based solvers, which directly encode the verification problem as a Mixed Integer Linear Program [3, 6-8, 18, 24, 27]; (iii) reachability-based methods, which perform layer-by-layer reachability analysis to compute the reachable set [5,13,19,20,29,32,36,37]; and (iv) convex relaxation methods [10,23,31,35]. Methods in categories (i)-(iii) tend to suffer from poor scalability, especially relative to convex relaxation methods.…”
Section: Rŝm (X) (mentioning)
confidence: 99%
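To make the convex-relaxation category concrete, here is a minimal sketch of the standard "triangle" relaxation of a single ReLU, bounded with an LP via SciPy. This is purely illustrative and is not PEREGRiNN's penalized relaxation; the pre-activation bounds and the linear objective are assumed values chosen for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Assumed pre-activation bounds for a single "crossing" ReLU neuron.
l, u = -1.0, 2.0
slope = u / (u - l)  # slope of the upper face of the triangle relaxation

# Variables v = [x, z], where z is intended to model relu(x).
# Triangle relaxation: z >= 0, z >= x, z <= slope * (x - l).
A_ub = np.array([
    [ 1.0,  -1.0],   #  x - z <= 0          (z >= x)
    [ 0.0,  -1.0],   #     -z <= 0          (z >= 0)
    [-slope, 1.0],   # z - slope*x <= -slope*l
])
b_ub = np.array([0.0, 0.0, -slope * l])

# Example objective: maximize z - 0.5*x (linprog minimizes, so negate it).
c = np.array([0.5, -1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(l, u), (0.0, None)], method="highs")
relaxed_max = -res.fun

# Exact maximum of relu(x) - 0.5*x over x in [l, u], by dense sampling.
xs = np.linspace(l, u, 10001)
exact_max = np.max(np.maximum(xs, 0.0) - 0.5 * xs)

# The LP value soundly over-approximates the exact one. For a single neuron
# the triangle is the convex hull of the ReLU graph, so a linear objective is
# tight; looseness appears when many neurons are relaxed jointly.
print(f"relaxed = {relaxed_max:.3f}, exact = {exact_max:.3f}")
assert relaxed_max >= exact_max - 1e-9
```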