Abstract. Given a parametric Markov model, we consider the problem of computing the rational function expressing the probability of reaching a given set of states. To attack this principal problem, Daws has suggested first converting the Markov chain into a finite automaton, from which a regular expression is computed. Afterwards, this expression is evaluated to a closed-form function representing the reachability probability. This paper investigates how this idea can be turned into an effective procedure. It turns out that the bottleneck lies in the growth of the regular expression relative to the number of states (n^Θ(log n)). We therefore proceed differently, by tightly intertwining the regular expression computation with its evaluation. This allows us to arrive at an effective method that avoids this blow-up in most practical cases. We give a detailed account of the approach, also extending to parametric models with rewards and with non-determinism. Experimental evidence is provided, illustrating that our implementation provides meaningful insights on non-trivial models.
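The intertwined computation described above can be pictured, for a concrete (non-parametric) Markov chain, as state elimination that evaluates probabilities on the fly instead of first building the regular expression and then interpreting it. The following is a minimal sketch under that reading; the representation (a dict of successor distributions over exact fractions) and all names are ours, not the paper's:

```python
from fractions import Fraction


def eliminate_reachability(trans, init, target):
    """Probability of reaching `target` from `init` in a finite DTMC,
    computed by state elimination: each eliminated state's self-loop is
    summed as a geometric series and its in/out edges are contracted,
    mirroring the automaton-to-regular-expression construction but
    evaluating the probabilities immediately."""
    trans = {s: dict(d) for s, d in trans.items()}  # defensive copy
    for s in [s for s in trans if s not in (init, target)]:
        loop = trans[s].pop(s, Fraction(0))
        if loop == 1:  # absorbing trap: paths entering s never reach target
            trans.pop(s)
            for d in trans.values():
                d.pop(s, None)
            continue
        scale = Fraction(1) / (1 - loop)  # geometric series for the self-loop
        succs = trans.pop(s)
        for u, d in trans.items():
            if s in d:
                p_us = d.pop(s)
                for v, p_sv in succs.items():
                    d[v] = d.get(v, Fraction(0)) + p_us * scale * p_sv
    loop = trans[init].get(init, Fraction(0))
    return trans[init].get(target, Fraction(0)) / (1 - loop)
```

In the parametric setting the same contraction is performed over rational functions rather than concrete fractions, which is where the paper's careful ordering and simplification matter.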
The Markov Reward Model Checker (MRMC) is a software tool for verifying properties over probabilistic models. It supports PCTL and CSL model checking, and their reward extensions. Distinguishing features of MRMC are its support for computing time- and reward-bounded reachability probabilities, (property-driven) bisimulation minimization, and precise on-the-fly steady-state detection. Recent tool features include time-bounded reachability analysis for uniform CTMDPs and CSL model checking by discrete-event simulation. This paper presents the tool's current status and its implementation details. This research was performed as part of the MC=MC project financed by the Netherlands Organization for Scientific Research (NWO) and the DFG Research Training Group 623 on Leistungsgarantien für Rechnersysteme. We thank Maneesh Khattri (Oxford Univ.), Christina Jansen (RWTH Aachen), and Tim Kemna (Univ. Twente) for their implementation efforts.
We provide the first solution for model-free reinforcement learning of ω-regular objectives for Markov decision processes (MDPs). We present a constructive reduction from the almost-sure satisfaction of ω-regular objectives to an almost-sure reachability problem, and extend this technique to learning how to control an unknown model so that the chance of satisfying the objective is maximized. A key feature of our technique is the compilation of ω-regular properties into limit-deterministic Büchi automata instead of the traditional Rabin automata; this choice sidesteps difficulties that have marred previous proposals. Our approach allows us to apply model-free, off-the-shelf reinforcement learning algorithms to compute optimal strategies from the observations of the MDP. We present an experimental evaluation of our technique on benchmark learning problems.

An ω-word w on an alphabet Σ is a function w : N → Σ. We abbreviate w(i) by w_i. The set of ω-words on Σ is written Σ^ω, and a subset of Σ^ω is an ω-language on Σ. A probability distribution over a finite set S is a function d : S → [0, 1] such that Σ_{s∈S} d(s) = 1. Let D(S) denote the set of all discrete distributions over S. We say a distribution d ∈ D(S) is a point distribution if d(s) = 1 for some s ∈ S. For a distribution d ∈ D(S) we write supp(d) := {s ∈ S : d(s) > 0}.
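The distribution notions defined above are straightforward to operationalize. A small sketch (all names are ours), representing a distribution as a dictionary from states to probabilities:

```python
def is_distribution(d, eps=1e-9):
    """d : S -> [0, 1] is a distribution iff its values lie in [0, 1]
    and sum to 1 (up to floating-point tolerance eps)."""
    return (all(0.0 <= p <= 1.0 for p in d.values())
            and abs(sum(d.values()) - 1.0) <= eps)


def supp(d):
    """Support of a distribution: the states with positive probability."""
    return {s for s, p in d.items() if p > 0}


def is_point(d):
    """Point distribution: all mass on a single state."""
    return any(p == 1 for p in d.values())
```

For example, the fair-coin distribution `{"heads": 0.5, "tails": 0.5}` is a distribution with support `{"heads", "tails"}` but not a point distribution.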
The formal analysis of critical systems is supported by a vast space of modelling formalisms and tools. The variety of incompatible formats and tools, however, poses a significant challenge to practical adoption as well as continued research. In this paper, we propose the Jani model format and tool interaction protocol. The format is a metamodel based on networks of communicating automata and has been designed for ease of implementation without sacrificing readability. The purpose of the protocol is to provide a stable and uniform interface between tools such as model checkers, transformers, and user interfaces. Jani uses the Json data format, inheriting its ease of use and inherent extensibility. Jani initially targets, but is not limited to, quantitative model checking. Several existing tools now support the verification of Jani models, and automatic converters from a diverse set of higher-level modelling languages have been implemented. The ultimate purpose of Jani is to simplify tool development, encourage research cooperation, and pave the way towards a future competition in quantitative model checking.
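To illustrate the flavour of a Json-based automata format, here is a sketch that builds and serializes a minimal Jani-style DTMC in Python. The field names follow our reading of the Jani specification and should be treated as illustrative rather than normative:

```python
import json

# A minimal Jani-style model: one automaton flipping a fair coin.
# Field names are our approximation of the Jani schema, for illustration only.
model = {
    "jani-version": 1,
    "name": "coin-flip",
    "type": "dtmc",
    "automata": [{
        "name": "flipper",
        "locations": [{"name": "l0"}, {"name": "heads"}, {"name": "tails"}],
        "initial-locations": ["l0"],
        "edges": [{
            "location": "l0",
            "destinations": [
                {"location": "heads", "probability": {"exp": 0.5}},
                {"location": "tails", "probability": {"exp": 0.5}},
            ],
        }],
    }],
    "system": {"elements": [{"automaton": "flipper"}]},
}

print(json.dumps(model, indent=2))
```

Because the format is plain Json, such models can be generated, inspected, and transformed with standard libraries in any language, which is precisely the ease of implementation the format aims for.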