Abstract. We develop a new method for safety verification of stochastic systems based on functions of states termed barrier certificates. Given a stochastic continuous or hybrid system and sets of initial and unsafe states, our method computes an upper bound on the probability that a trajectory of the system reaches the unsafe set, a bound whose validity is proven by the existence of a barrier certificate. For polynomial systems, both the upper bound and its corresponding barrier certificate can be computed using convex optimization, and hence the method is computationally tractable.
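The paper's probabilistic bound relies on a supermartingale condition and convex (sum-of-squares) optimization; as a hedged illustration of the simpler deterministic analogue of the barrier conditions (B nonpositive on the initial set, positive on the unsafe set, and non-increasing along the flow), here is a toy symbolic check. The system dx/dt = -x, the sets, and the candidate B are assumptions chosen for the example, not taken from the paper.

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = -x            # illustrative dynamics dx/dt = -x
B = x**2 - 1      # candidate barrier certificate

half = sp.Rational(1, 2)
# Condition 1: B <= 0 on the initial set |x| <= 1/2
init_max = sp.maximum(B, x, sp.Interval(-half, half))
# Condition 2: B > 0 on the unsafe set x >= 2
unsafe_min = sp.minimum(B, x, sp.Interval(2, sp.oo))
# Condition 3: dB/dt = (dB/dx) * f <= 0 along trajectories
Bdot = sp.simplify(sp.diff(B, x) * f)   # = -2*x**2

print(init_max, unsafe_min, Bdot)
```

Since `init_max` is -3/4, `unsafe_min` is 3, and `Bdot = -2*x**2` is nonpositive everywhere, B separates trajectories from the unsafe set; the paper's contribution is the stochastic generalization, where the bound on B controls the reachability probability.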
Abstract. An important approach to decidability questions for verification algorithms of hybrid systems has been the construction of a bisimulation. Bisimulations are finite state quotients whose reachability properties are equivalent to those of the original infinite state hybrid system. In this paper we introduce the notion of o-minimal hybrid systems, which are initialized hybrid systems whose relevant sets and flows are definable in an o-minimal theory. We prove that o-minimal hybrid systems always admit finite bisimulations. We then present specific examples of hybrid systems with complex continuous dynamics for which finite bisimulations exist.
This article addresses the problem of verifying the safety of autonomous systems with neural network (NN) controllers. We focus on NNs with sigmoid/tanh activations and use the fact that the sigmoid/tanh is the solution to a quadratic differential equation. This allows us to convert the NN into an equivalent hybrid system and cast the problem as a hybrid system verification problem, which can be solved by existing tools. Furthermore, we improve the scalability of the proposed method by approximating the sigmoid with a Taylor series with worst-case error bounds. Finally, we provide an evaluation over four benchmarks, including comparisons with alternative approaches based on mixed integer linear programming as well as on star sets.
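The key fact this article exploits, that the sigmoid satisfies the quadratic differential equation σ' = σ(1 − σ) and tanh the analogous tanh' = 1 − tanh², can be sanity-checked numerically. The sketch below is illustrative only and is not the article's verification pipeline; it compares a central finite difference against the quadratic identities.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

xs = np.linspace(-5, 5, 101)
h = 1e-6

# Central-difference derivative vs. the identity sigma' = sigma * (1 - sigma)
num_grad = (sigmoid(xs + h) - sigmoid(xs - h)) / (2 * h)
quad = sigmoid(xs) * (1 - sigmoid(xs))

# tanh satisfies the analogous quadratic identity tanh' = 1 - tanh^2
num_grad_t = (np.tanh(xs + h) - np.tanh(xs - h)) / (2 * h)
quad_t = 1 - np.tanh(xs) ** 2

print(np.max(np.abs(num_grad - quad)), np.max(np.abs(num_grad_t - quad_t)))
```

Because the activations obey polynomial ODEs, each neuron can be modeled as an extra continuous state of a hybrid system, which is what lets existing hybrid-system reachability tools verify the closed loop.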
Many resource allocation problems can be formulated as an optimization problem whose constraints contain sensitive information about participating users. This paper concerns solving this kind of optimization problem in a distributed manner while protecting the privacy of user information. Without privacy considerations, existing distributed algorithms typically have a central entity compute and broadcast certain public coordination signals to participating users. However, the coordination signals often depend on user information, so an adversary with access to them can potentially decode information about individual users and put user privacy at risk. We present a distributed optimization algorithm that preserves differential privacy, a strong notion that guarantees user privacy regardless of any auxiliary information an adversary may have. The algorithm achieves privacy by perturbing the public signals with additive noise whose magnitude is determined by the sensitivity of the projection operation onto user-specified constraints. By viewing the differentially private algorithm as an implementation of stochastic gradient descent, we derive a bound on the suboptimality of the algorithm. We illustrate the implementation of our algorithm via a case study of electric vehicle charging: we derive the sensitivity and present numerical simulations, which let us investigate various aspects of the algorithm in practice, including the choice of step size, the number of iterations, and the trade-off between privacy level and suboptimality. (Comment: Submitted to the IEEE Transactions on Automatic Control.)
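The perturbation idea can be sketched in a few lines. The following is not the paper's algorithm but a toy analogue: a coordinator runs noisy gradient steps on a hypothetical quadratic allocation objective, adding Laplace noise scaled by an assumed sensitivity bound and privacy budget before each update is broadcast. All names, data, and constants here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: each user i has a private demand d_i; the coordinator seeks a
# price p minimizing sum_i (p - d_i)^2 / 2, whose optimum is the mean demand.
demands = np.array([1.0, 2.0, 4.0])   # hypothetical private user data
eps = 1.0                              # privacy budget per broadcast
sensitivity = 1.0                      # assumed bound on one user's influence

p = 0.0
step = 0.1
for _ in range(200):
    grad = np.sum(p - demands)                            # aggregate gradient
    noisy = grad + rng.laplace(scale=sensitivity / eps)   # Laplace mechanism
    p -= step * noisy / len(demands)                      # broadcast update

print(p)  # noisy iterates hover near the mean demand, 7/3
```

Viewed this way, the broadcast signal is a stochastic gradient whose noise protects individual demands, which is exactly the framing that lets the paper bound the resulting suboptimality.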