2022
DOI: 10.1609/aaai.v36i6.20631

GoTube: Scalable Statistical Verification of Continuous-Depth Models

Abstract: We introduce a new statistical verification algorithm that formally quantifies the behavioral robustness of any time-continuous process formulated as a continuous-depth model. Our algorithm solves a set of global optimization (Go) problems over a given time horizon to construct a tight enclosure (Tube) of the set of all process executions starting from a ball of initial states. We call our algorithm GoTube. Through its construction, GoTube ensures that the bounding tube is conservative up to a desired probability…
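To make the abstract's construction concrete, here is a minimal Monte Carlo sketch of the bounding-tube idea: sample initial states from a ball, propagate them through the system dynamics, and enclose the propagated samples in an inflated ball around the center trajectory. This is a caricature for intuition only, not GoTube's actual algorithm (GoTube obtains its probabilistic conservativeness guarantee via global optimization over the initial ball, not naive sampling); the Van der Pol dynamics, sample count, and inflation factor below are illustrative assumptions.

```python
import math
import random

def flow(x, t_end, dt=0.01):
    """Euler-integrate a simple 2-D nonlinear ODE (a Van der Pol
    oscillator standing in for a continuous-depth model)."""
    x1, x2 = x
    t = 0.0
    while t < t_end:
        dx1 = x2
        dx2 = (1.0 - x1 * x1) * x2 - x1
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
        t += dt
    return (x1, x2)

def statistical_tube_radius(center, ball_radius, t_end,
                            n_samples=500, inflate=1.1):
    """Sample initial states uniformly from a ball around `center`,
    propagate each through the flow, and return an empirical radius
    (inflated by a safety factor) enclosing all sampled executions
    at time t_end around the center trajectory."""
    random.seed(0)  # deterministic for reproducibility of the sketch
    cx = flow(center, t_end)  # reference trajectory of the ball's center
    max_dist = 0.0
    for _ in range(n_samples):
        # rejection-sample a point from the unit disk
        while True:
            u, v = random.uniform(-1, 1), random.uniform(-1, 1)
            if u * u + v * v <= 1.0:
                break
        x0 = (center[0] + ball_radius * u, center[1] + ball_radius * v)
        xt = flow(x0, t_end)
        d = math.hypot(xt[0] - cx[0], xt[1] - cx[1])
        max_dist = max(max_dist, d)
    return inflate * max_dist
```

Evaluating this radius at a sequence of time points yields a tube of balls around the center trajectory; the gap between such an empirical estimate and a set that is provably conservative with probability at least 1 − δ is exactly what GoTube's optimization-based construction closes.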

Cited by 8 publications (4 citation statements)
References 24 publications
“…Some notable examples are Sherlock (Dutta, Chen, and Sankaranarayanan 2019) and ReachNN/ReachNN* (Huang et al. 2019; Fan et al. 2020), which use polynomial approximations to overapproximate the reachable set over a given time horizon; NNV (Tran et al. 2020), which is based on abstract interpretation; LRT-NG (Gruenbacher et al. 2020), which overapproximates the reachable set as a sequence of hyperspheres; and Verisig (Ivanov et al. 2019), which reduces the problem to reachability analysis in hybrid systems. Furthermore, GoTube (Gruenbacher et al. 2021) constructs the reachable set of a deterministic continuous-time system with statistical guarantees that the constructed set overapproximates the true reachable states.…”
Section: Reachability for Deterministic Control Problems
Mentioning confidence: 99%
“…Similar to RNNs, neural ODEs are deep learning models with "memory", which makes them suitable for learning time-series data, but they are also applicable to other tasks such as continuous normalizing flows (CNF) and image classification [11,61]. However, existing work is limited to a stochastic reachability approach [27,28]; reachability approaches using star- and zonotope-based methods for a general class of neural ODEs (GNODE) with continuous- and discrete-time layers [52]; and GAINS [89], which leverages ODE-solver information to discretize the models using a computation graph that represents all possible trajectories from a given input, accelerating its bound-propagation method. However, one of the main challenges is to find a framework that is able to verify several of these models successfully.…”
Section: Related Work
Mentioning confidence: 99%
“…Neural networks have been shown to generalize well to unseen test data on a large range of tasks, despite achieving zero loss on training data (Zhang et al., 2017, 2021). But this performance is useless if neural networks cannot be used in practice due to safety and security issues (Gruenbacher et al., 2022; Grunbacher et al., 2021; Lechner et al., 2020; Xiao et al., 2021, 2022). A fundamental question in the security of neural networks is how much information is leaked via this training procedure; that is, can adversaries with access to trained models, or predictions from a model, infer what data was used to train the model?…”
Section: Introduction
Mentioning confidence: 99%