2021
DOI: 10.1609/aaai.v35i13.17372
On the Verification of Neural ODEs with Stochastic Guarantees

Abstract: We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems. For this purpose, we introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight Reachtube (an over-approximation of the set of reachable states over a given time-horizon), and provide stochastic guarantees in the form of confidence intervals for the Reachtube bounds. SLR inherently avoids the infamous wrapping effect […]
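To make the Reachtube notion concrete, here is a toy sketch. This is not the paper's SLR algorithm: SLR constructs a tight over-approximation with stochastic confidence bounds via global optimization, whereas the code below merely computes an empirical per-step interval hull of sampled trajectories (an under-approximation) for an assumed simple dynamics dx/dt = -x, integrated with explicit Euler.

```python
import numpy as np

# Toy illustration only, NOT the paper's SLR method: estimate a crude
# "Reachtube" (per-time-step bounding boxes of states reachable from an
# initial box) by sampling initial states and integrating forward.

def euler_trajectories(x0, f, dt=0.01, steps=100):
    """Integrate a batch of initial states x0 with shape (n, d) using explicit Euler."""
    traj = [x0]
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)
        traj.append(x)
    return np.stack(traj)  # shape (steps + 1, n, d)

rng = np.random.default_rng(0)
x0 = rng.uniform(0.9, 1.1, size=(500, 1))    # initial box [0.9, 1.1]
traj = euler_trajectories(x0, lambda x: -x)  # contracting dynamics dx/dt = -x

lo = traj.min(axis=1)  # per-step lower bound of the sampled hull
hi = traj.max(axis=1)  # per-step upper bound of the sampled hull
print(float(hi[0] - lo[0]), float(hi[-1] - lo[-1]))
```

For this contracting system the tube width shrinks over time; a sound verifier such as SLR must additionally bound the gap between such empirical hulls and the true reachable set, which is where the confidence intervals of the paper come in.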

Cited by 13 publications (4 citation statements)
References 49 publications
“…Moreover, we speculate that inferring causality from ODE-based networks might be more straightforward than a closed-form solution 24 . It would also be beneficial to assess whether verifying a continuous neural flow 35 is more tractable by using an ODE representation of the system or its closed form.…”
Section: What Are the Limitations of CfCs?
confidence: 99%
“…Similar to RNNs, neural ODEs are also deep learning models with "memory", which makes them suitable to learn time-series data, but are also applicable to other tasks such as continuous normalizing flows (CNF) and image classification [11,61]. However, existing work is limited to a stochastic reachability approach [27,28], reachability approaches using star and zonotope reachability methods for a general class of neural ODEs (GNODE) with continuous and discrete time layers [52], and GAINS [89], which leverages ODE-solver information to discretize the models using a computation graph that represent all possible trajectories from a given input to accelerate their bound propagation method. However, one of the main challenges is to find a framework that is able to verify several of these models successfully.…”
Section: Related Work
confidence: 99%
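The citation statement above treats neural ODEs as deep learning models whose state evolves under a learned vector field. As a hedged sketch of that idea only (the weights below are random stand-ins, not a trained model, and real implementations use adaptive solvers rather than fixed-step Euler), a minimal forward pass looks like:

```python
import numpy as np

# Hypothetical minimal neural ODE: the time derivative of the state is a
# small neural network f_theta, and the forward pass integrates it.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 2)), np.zeros(2)

def f_theta(x):
    """Learned vector field x -> dx/dt (weights here are untrained stand-ins)."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def neural_ode_forward(x0, dt=0.05, steps=20):
    """Explicit-Euler integration of the learned dynamics over steps * dt time."""
    x = x0
    for _ in range(steps):
        x = x + dt * f_theta(x)
    return x

y = neural_ode_forward(np.array([1.0, -1.0]))
```

Verifying such a model amounts to bounding every state this integration can reach from a set of initial conditions, which is why the reachability approaches surveyed in the quoted passage are the natural tool.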
“…Neural networks have been shown to generalize well to unseen test data on a large range of tasks, despite achieving zero loss on training data (Zhang et al., 2017, 2021). But this performance is useless if neural networks cannot be used in practice due to safety and security issues (Gruenbacher et al., 2022; Grunbacher et al., 2021; Lechner et al., 2020; Xiao et al., 2021, 2022). A fundamental question in the security of neural networks is how much information is leaked via this training procedure, that is, can adversaries with access to trained models, or predictions from a model, infer what data was used to train the model?…”
Section: Introduction
confidence: 99%