2021
DOI: 10.1007/978-3-030-81685-8_10

Scalable Polyhedral Verification of Recurrent Neural Networks

Abstract: We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and non-linear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient-descent-based algorithm for abstraction refinement, guided by the certification problem, that combines multiple abstractions for each neuron. Using Prover, we present the first study of certifying a non-trivial…
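The first idea can be illustrated for a single one-dimensional activation. The sketch below is a generic reconstruction, not the authors' implementation, and all function names are hypothetical: a sampled candidate slope w for a linear upper bound of tanh on [l, u] is turned into a sound offset by Fermat's theorem, since the maximum of tanh(x) - w*x over [l, u] is attained at an endpoint or at an interior critical point where tanh'(x) = w.

```python
import numpy as np

def sound_upper_offset(w, l, u):
    """Smallest b such that tanh(x) <= w*x + b for all x in [l, u].

    By Fermat's theorem, the maximum of tanh(x) - w*x on [l, u] lies at
    an endpoint or at a critical point with tanh'(x) = 1 - tanh(x)^2 = w.
    """
    candidates = [l, u]
    if 0.0 < w < 1.0:                       # interior critical points exist
        t = np.sqrt(1.0 - w)                # tanh(x) = +/- sqrt(1 - w)
        for x in (np.arctanh(t), np.arctanh(-t)):
            if l <= x <= u:
                candidates.append(x)
    return max(np.tanh(x) - w * x for x in candidates)

# Sampling + optimization: try candidate slopes, keep the bound whose
# average value over [l, u] is smallest (i.e., the tightest on average).
l, u = -1.0, 2.0
slopes = np.linspace(1e-3, 1.0 - 1e-3, 50)
w, b = min(((w, sound_upper_offset(w, l, u)) for w in slopes),
           key=lambda p: p[0] * (l + u) / 2 + p[1])
print(f"tanh(x) <= {w:.3f}*x + {b:.3f} on [{l}, {u}]")
```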

Cited by 24 publications (15 citation statements)
References 22 publications
“…We leave as future work a further investigation of variations of ADMM that can improve convergence rates in deep-learning-sized problem instances, as well as extensions beyond the LP setting. Furthermore, it would be interesting to extend the proposed method to the verification of recurrent neural networks (RNNs) such as vanilla RNNs, LSTMs, and GRUs [46], [47], [48]. …where T_f(x) ∈ ∂f(x) denotes a subgradient…”
Section: Discussion
confidence: 99%
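The trailing clause in the quoted statement above is a fragment of the citing paper's notation. Purely as a generic illustration of a subgradient T_f(x) ∈ ∂f(x), and not that paper's ADMM solver, consider f(x) = |x - 3|, which is non-differentiable at its minimizer:

```python
import numpy as np

def T_f(x):
    # A valid subgradient of f(x) = |x - 3|: sign(x - 3), with the
    # choice 0 at the kink (any value in [-1, 1] would be admissible).
    return np.sign(x - 3.0)

# Subgradient descent with diminishing steps 1/k converges to x = 3.
x = 10.0
for k in range(1, 2001):
    x -= (1.0 / k) * T_f(x)
print(x)  # close to 3.0
```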
“…Following that, they represent all the perturbation sets as a hyperrectangle and pass the hyperrectangle through the remaining network using the IBP technique [29]. Following a similar direction, the work in [88] presents the Polyhedral Robustness Verifier of RNNs (PROVER), which represents the perturbations of the input data as polyhedra that are passed through an LSTM network to obtain a certifiably verified network for more general sequential data. In this line, [26] proposed the robust certified abstract transformer AI2, using zonotopes as the abstract representation of the input perturbation, where the transformer minimizes the zonotope projections to achieve certified robustness.…”
Section: Robustness By Certification
confidence: 99%
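For context, the hyperrectangle propagation mentioned above (IBP, in the spirit of [29]) can be sketched in a few lines. This is a generic illustration, not the cited implementation; the weights and input box are made-up examples:

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through an affine layer W x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius          # worst case over the box
    return c - r, c + r

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps box endpoints to box endpoints."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.zeros(2)
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = ibp_relu(*ibp_affine(lo, hi, W, b))
print(lo, hi)
```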
“…We also validate our approaches on more complex benchmarks compared with the aforementioned work. Related to abstraction refinement, [25,32] use the post-condition to refine the choice of slopes in forward over-approximations, but do not consider the post-condition as a hard constraint. [37] combines forward abstract interpretation with MILP/LP solving for refinement, but only considers the pre-condition, and the refinement is due to the precision gain of the MILP/LP solving.…”
Section: Related Work
confidence: 99%
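As a concrete illustration of the "precision gain of the MILP/LP solving" mentioned for [37], the standard triangle relaxation of a ReLU can be solved exactly as an LP. This is a generic sketch, not that paper's code; W, c, and the input box are made-up examples, and the pre-activation bounds come from the interval step:

```python
import numpy as np
from scipy.optimize import linprog

# Lower-bound y = c · relu(W x) over x in [-1, 1]^2 via the triangle
# relaxation: h >= 0, h >= W x, h <= s * (W x - l) with s = u / (u - l).
W = np.array([[1.0, -1.0], [0.5, 2.0]])
c = np.array([1.0, -1.0])

l = -np.abs(W).sum(axis=1)          # interval pre-activation bounds
u = np.abs(W).sum(axis=1)           # (here each neuron is unstable: l < 0 < u)
s = u / (u - l)

# Decision vector v = (x1, x2, h1, h2), where h_i models relu(W_i x).
A, b = [], []
for i in range(2):
    e = np.eye(2)[i]
    A.append(np.concatenate([W[i], -e]));          b.append(0.0)           # h >= W x
    A.append(np.concatenate([-s[i] * W[i], e]));   b.append(-s[i] * l[i])  # h <= s (W x - l)

res = linprog(np.concatenate([np.zeros(2), c]),
              A_ub=np.array(A), b_ub=np.array(b),
              bounds=[(-1, 1)] * 2 + [(0, None)] * 2,
              method="highs")
print("certified lower bound on y:", res.fun)
```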
“…To produce node, job, and global embeddings, we used neural networks with six fully-connected layers each, containing [16,8,8,16,8,8] neurons in their layers, respectively. Finally, two neural networks with four fully-connected layers, containing [32,16,8,1] neurons respectively, mapped the embeddings to actions, i.e., a node selected for scheduling and a number of executors to assign. Training was performed using the REINFORCE policy-gradient algorithm executed on 16 workers.…”
Section: G Training
confidence: 99%
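Reading the quoted architecture literally, the embedding and action-head networks could be reproduced as below. The layer widths come from the quote; the input sizes, ReLU activations, and all names are assumptions for illustration:

```python
import torch.nn as nn

def mlp(in_dim, widths):
    """Fully-connected stack with ReLU between hidden layers."""
    layers, prev = [], in_dim
    for i, w in enumerate(widths):
        layers.append(nn.Linear(prev, w))
        if i < len(widths) - 1:
            layers.append(nn.ReLU())
        prev = w
    return nn.Sequential(*layers)

embed_net = mlp(in_dim=5, widths=[16, 8, 8, 16, 8, 8])  # six FC layers; input size assumed
action_head = mlp(in_dim=8, widths=[32, 16, 8, 1])      # four FC layers, one scalar score
```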