The field of numerical analysis has developed numerous benchmarks for evaluating differential and algebraic equation solvers. In this paper, we describe a set of benchmarks commonly used in numerical analysis that may also be effective for evaluating continuous and hybrid systems reachability and verification methods. Many of these examples are challenging, with highly nonlinear differential equations and upwards of tens of dimensions (state variables). Additionally, many examples in numerical analysis are originally encoded as differential algebraic equations (DAEs) with index greater than one, or as implicit differential equations (IDEs), both of which are challenging to model as hybrid automata. We present executable models for ten benchmarks from a test set for initial value problems (IVPs) in the SpaceEx format (allowing nonlinear equations instead of restricting to affine ones) and illustrate their conversion to several other formats (dReach, Flow*, and MathWorks Simulink/Stateflow [SLSF]) using the HyST tool. For some instances, we present successful analysis results using dReach, Flow*, and SLSF.
Context and Origins

Verification and validation are important tasks that have been applied broadly in recent years in fields such as embedded systems, power electronics, networked control systems, and aerospace systems [4,16,17]. Many different verification methods and tools have been developed for reachability analysis of hybrid systems [2,3,7,14]. The challenges in verification of continuous and hybrid systems are many, and include, for example, complex nonlinear dynamics, high-dimensional state spaces, and bounded vs. unbounded time horizons. To evaluate novel verification methods and tools, we need to test them on a diverse variety of benchmarks, ideally standardized ones. However, such benchmarks are not currently standardized, so it is difficult to evaluate whether particular state representations (e.g., zonotopes [1], Taylor models [7], support functions [13], polyhedra, hypercubes [5], symbolic/SMT formulas [14], etc.) and verification techniques are superior for different classes of hybrid automata.

In this paper, we present a set of ten different, executable benchmarks to aid in the development of a standardized benchmark suite with which the verification community can evaluate verification methods and tools. These benchmarks are derived from a test set for initial value problems (IVPs).
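To make the notion of an IVP benchmark concrete, the following is a minimal sketch (not taken from the paper's benchmark set) of integrating a classic nonlinear IVP, the Van der Pol oscillator, with a fixed-step fourth-order Runge-Kutta solver. Van der Pol is a standard test problem in the numerical-analysis IVP literature; whether it appears among the ten benchmarks here is not asserted. The step size and time horizon are illustrative choices only.

```python
# Sketch only: the Van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = 0,
# rewritten as a first-order system and integrated with classical RK4.

def van_der_pol(state, mu=1.0):
    """Right-hand side of the first-order system (x' = v, v' = ...)."""
    x, v = state
    return (v, mu * (1.0 - x * x) * v - x)

def rk4_step(f, state, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + (h / 6.0) * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def integrate(f, state, h, steps):
    """Integrate with fixed steps; returns the final state."""
    for _ in range(steps):
        state = rk4_step(f, state, h)
    return state

# Integrate from (x, v) = (2, 0) over t in [0, 10] with step h = 0.01.
final = integrate(van_der_pol, (2.0, 0.0), 0.01, 1000)
```

A reachability tool would, in contrast, compute an over-approximation of all trajectories from a set of initial states rather than a single numerical trajectory, which is what makes such nonlinear IVPs useful stress tests for the verification tools discussed above.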