Message sequence charts (MSCs) and high-level MSCs (HMSCs) are visual notations for asynchronously communicating processes and a standard of the ITU. They usually represent incomplete specifications of required or forbidden properties of communication protocols. We consider in this paper two basic problems concerning the automated validation of HMSC specifications, namely model-checking and synthesis. We identify natural syntactic restrictions of HMSCs for which we can solve the above questions. We show first that model-checking for globally cooperative (and locally cooperative) HMSCs is decidable within the same complexity as for the restricted class of bounded HMSCs. Furthermore, model-checking local-choice HMSCs turns out to be as efficient as for finite-state (sequential) systems. The study of locally cooperative and local-choice HMSCs is motivated by the synthesis question, i.e., the question of implementing HMSCs through communicating finite-state machines (CFMs) with additional message data. We show that locally cooperative and local-choice HMSCs are always implementable. Furthermore, the implementation of a local-choice HMSC is deadlock-free and of linear size.
A finite-state Markov chain M can be regarded as a linear transform operating on the set of probability distributions over its node set. The iterative applications of M to an initial probability distribution μ0 will generate a trajectory of probability distributions. Thus, a set of initial distributions will induce a set of trajectories. It is an interesting and useful task to analyze the dynamics of M as defined by this set of trajectories. The novel idea here is to carry out this task in a symbolic framework. Specifically, we discretize the probability value space [0, 1] into a finite set of intervals I = {I1, I2, . . . , Im}. A concrete probability distribution μ over the node set {1, 2, . . . , n} of M is then symbolically represented as D, a tuple of intervals drawn from I, where the ith component of D will be the interval in which μ(i) falls. The set of discretized distributions D is a finite alphabet. Hence, the trajectory generated by repeated applications of M to an initial distribution will induce an infinite string over this alphabet. Given a set of initial distributions, the symbolic dynamics of M will then consist of a language of infinite strings L over the alphabet D. Our main goal is to verify whether L meets a specification given as a linear-time temporal logic formula ϕ. In our logic, an atomic proposition will assert that the current probability of a node falls in the interval I from I. If L is an ω-regular language, one can hope to solve our model-checking problem (whether L |= ϕ?) using standard techniques. However, we show that, in general, this is not the case. Consequently, we develop the notion of an ε-approximation, based on the transient and long-term behaviors of the Markov chain M.
Briefly, the symbolic trajectory ξ′ is an ε-approximation of the symbolic trajectory ξ iff (1) ξ′ agrees with ξ during its transient phase; and (2) both ξ′ and ξ are within an ε-neighborhood at all times after the transient phase. Our main results are that one can effectively check whether (i) for each infinite word in L, at least one of its ε-approximations satisfies the given specification; (ii) for each infinite word in L, all its ε-approximations satisfy the specification. These verification results are strong in that they apply to all finite-state Markov chains.
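The discretization described above can be illustrated with a short sketch. This is not the paper's algorithm, only a minimal illustration of the mapping it defines: the function names, the interval representation as (lo, hi) pairs, and the first-match rule for boundary values are all assumptions made here for concreteness.

```python
import numpy as np

def discretize(mu, intervals):
    # Map a distribution mu to its symbolic form: the tuple whose ith
    # component is the index of the interval containing mu(i).
    # Boundary values are resolved by taking the first matching interval.
    symbol = []
    for p in mu:
        for idx, (lo, hi) in enumerate(intervals):
            if lo <= p <= hi:
                symbol.append(idx)
                break
    return tuple(symbol)

def symbolic_trajectory(M, mu0, intervals, steps):
    # Finite prefix of the symbolic trajectory induced by repeatedly
    # applying the (row-stochastic) chain M to the initial distribution mu0.
    M = np.asarray(M, dtype=float)
    mu = np.asarray(mu0, dtype=float)
    prefix = []
    for _ in range(steps):
        prefix.append(discretize(mu, intervals))
        mu = mu @ M  # one step of the chain
    return prefix

# A two-node chain, the initial distribution (1, 0), and the
# discretization of [0, 1] into two intervals.
M = [[0.5, 0.5], [0.2, 0.8]]
traj = symbolic_trajectory(M, [1.0, 0.0], [(0.0, 0.5), (0.5, 1.0)], steps=5)
```

Each distinct tuple produced by `discretize` is one letter of the finite alphabet D; an infinite run of `symbolic_trajectory` would spell out one infinite word of the language L.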
Abstract. We consider the distributed control problem in the setting of Zielonka asynchronous automata. Such automata are compositions of finite processes communicating via shared actions and evolving asynchronously. Most importantly, processes participating in a shared action can exchange complete information about their causal past. This gives more power to controllers, and avoids simple pathological undecidable cases as in the setting of Pnueli and Rosner. We show the decidability of the control problem for Zielonka automata over acyclic communication architectures. We also provide a matching lower bound, which is l-fold exponential, where l is the height of the architecture tree.
We consider the standard model of finite two-person zero-sum games.
Abstract. There are many cases where we want to verify a system that does not have a usable formal model: the model may be missing, out of date, or simply too big to be used. A possible method is to analyze the system while learning the model (black box checking). However, learning may be an expensive task, thus it needs to be guided, e.g., using the checked property or an inaccurate model (adaptive model checking). In this paper, we consider the case where some of the system components are completely specified (white boxes), while others are unknown (black boxes), giving rise to a grey box system. We provide algorithms and lower bounds, as well as experimental results for this model.