Given a relational specification between Boolean inputs and outputs, the goal of Boolean functional synthesis is to synthesize each output as a function of the inputs such that the specification is met. In this paper, we first show that unless some hard conjectures in complexity theory are falsified, Boolean functional synthesis must generate large Skolem functions in the worst case. Given this inherent hardness, what does one do to solve the problem? We present a two-phase algorithm, where the first phase is efficient both in terms of time and the size of the synthesized functions, and solves a large fraction of benchmarks. To explain this surprisingly good performance, we provide a sufficient condition under which the first phase must produce correct answers. When this condition fails, the second phase builds upon the result of the first phase, possibly requiring exponential time and generating exponential-sized functions in the worst case. A detailed experimental evaluation shows that our algorithm performs better than other techniques on a large number of benchmarks.
Given a Boolean formula F(X, Y), where X is a vector of outputs and Y is a vector of inputs, the Boolean functional synthesis problem requires us to compute a Skolem function vector Ψ(Y) for X such that F(Ψ(Y), Y) holds whenever ∃X F(X, Y) holds. In this paper, we investigate the relation between the representation of the specification F(X, Y) and the complexity of synthesis. We introduce a new normal form for Boolean formulas, called SynNNF, that guarantees polynomial-time synthesis and also polynomial-time existential quantification for some order of quantification of variables. We show that several normal forms studied in the knowledge compilation literature are subsumed by SynNNF, although SynNNF can be super-polynomially more succinct than them. Motivated by these results, we propose an algorithm to convert a specification in CNF to SynNNF, with the intent of solving the Boolean functional synthesis problem. Experiments with a prototype implementation show that this approach solves several benchmarks beyond the reach of state-of-the-art tools.
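The Skolem-function notion in the abstract above can be illustrated with a tiny brute-force sketch. The toy specification F and all names below are invented for illustration; for a single output x, the positive cofactor F(x=1, Y) is a classical choice of Skolem function, checked here by exhaustive enumeration.

```python
from itertools import product

# Hypothetical toy specification F(x, y1, y2): x must equal (y1 OR y2).
def F(x, y1, y2):
    return x == (y1 or y2)

# Classical cofactor-based Skolem function for a single output:
# psi(Y) = F(x=1, Y) is correct whenever some value of x satisfies F(x, Y).
def psi(y1, y2):
    return F(True, y1, y2)

# Brute-force check: whenever ∃x F(x, Y) holds, F(psi(Y), Y) must also hold.
for y1, y2 in product([False, True], repeat=2):
    if any(F(x, y1, y2) for x in (False, True)):
        assert F(psi(y1, y2), y1, y2)
```

The same cofactor idea underlies synthesis from normal forms such as SynNNF, where the point is that the cofactor (and its simplification) can be computed in polynomial time.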
A finite-state Markov chain M can be regarded as a linear transform operating on the set of probability distributions over its node set. The iterative application of M to an initial probability distribution μ_0 generates a trajectory of probability distributions. Thus, a set of initial distributions induces a set of trajectories. It is an interesting and useful task to analyze the dynamics of M as defined by this set of trajectories. The novel idea here is to carry out this task in a symbolic framework. Specifically, we discretize the probability value space [0, 1] into a finite set of intervals I = {I_1, I_2, ..., I_m}. A concrete probability distribution μ over the node set {1, 2, ..., n} of M is then symbolically represented as D, a tuple of intervals drawn from I, where the i-th component of D is the interval in which μ(i) falls. The set of discretized distributions D is a finite alphabet. Hence, the trajectory generated by repeated applications of M to an initial distribution induces an infinite string over this alphabet. Given a set of initial distributions, the symbolic dynamics of M then consists of a language L of infinite strings over the alphabet D. Our main goal is to verify whether L meets a specification given as a linear-time temporal logic formula ϕ. In our logic, an atomic proposition asserts that the current probability of a node falls in the interval I from I. If L is an ω-regular language, one can hope to solve our model-checking problem (whether L |= ϕ?) using standard techniques. However, we show that, in general, this is not the case. Consequently, we develop the notion of an ε-approximation, based on the transient and long-term behaviors of the Markov chain M.
Briefly, the symbolic trajectory ξ′ is an ε-approximation of the symbolic trajectory ξ iff (1) ξ′ agrees with ξ during its transient phase; and (2) ξ and ξ′ are within an ε-neighborhood of each other at all times after the transient phase. Our main results are that one can effectively check whether (i) for each infinite word in L, at least one of its ε-approximations satisfies the given specification; (ii) for each infinite word in L, all its ε-approximations satisfy the specification. These verification results are strong in that they apply to all finite-state Markov chains.
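The discretization described in this abstract can be sketched concretely. Below is a minimal, illustrative example (the chain M, the interval set, and the horizon are made up): each distribution along a trajectory of a 2-state Markov chain is mapped to a symbol, a tuple recording which interval of I = {[0, 0.5), [0.5, 1]} each coordinate falls in.

```python
# Illustrative 2-state row-stochastic transition matrix.
M = [[0.9, 0.1],
     [0.2, 0.8]]

def step(mu, M):
    # One application of M to the distribution mu (row vector times matrix).
    n = len(mu)
    return [sum(mu[i] * M[i][j] for i in range(n)) for j in range(n)]

def symbol(mu):
    # Map each coordinate to the index of the interval it falls in:
    # 0 for [0, 0.5), 1 for [0.5, 1].
    return tuple(0 if p < 0.5 else 1 for p in mu)

mu = [1.0, 0.0]          # initial distribution mu_0
trajectory = []          # finite prefix of the induced infinite string
for _ in range(5):
    trajectory.append(symbol(mu))
    mu = step(mu, M)
```

Running the chain forever would yield an infinite string over the alphabet of symbols; the language L of the abstract collects such strings over all initial distributions.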
Given a relational specification ϕ(X, Y ), where X and Y are sequences of input and output variables, we wish to synthesize each output as a function of the inputs such that the specification holds. This is called the Boolean functional synthesis problem and has applications in several areas. In this paper, we present the first parallel approach for solving this problem, using compositional and CEGAR-style reasoning as key building blocks. We show by means of extensive experiments that our approach outperforms existing tools on a large class of benchmarks.
This paper addresses a control problem for probabilistic models in the setting of Markov decision processes (MDP). We are interested in the steady-state control problem, which asks, given an ergodic MDP M and a distribution δ_goal, whether there exists a (history-dependent randomized) policy π ensuring that the steady-state distribution of M under π is exactly δ_goal. We first show that stationary randomized policies suffice to achieve a given steady-state distribution. Then we infer that the steady-state control problem is decidable for MDP, and can be represented as a linear program which is solvable in PTIME. This decidability result extends to labeled MDP (LMDP), where the objective is a steady-state distribution on labels carried by the states, and we provide a PSPACE algorithm. We also show that a related steady-state language inclusion problem is decidable in EXPTIME for LMDP. Finally, we prove that if we consider MDP under partial observation (POMDP), the steady-state control problem becomes undecidable.
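The key observation in this abstract, that a stationary randomized policy π induces a plain Markov chain on the MDP, can be sketched as follows. The MDP, the policy, and δ_goal below are invented for illustration; the induced chain's steady state is computed by power iteration and compared with the target.

```python
# P[s][a][t] = transition probability from state s to state t under action a.
P = [
    [[0.9, 0.1], [0.1, 0.9]],   # state 0, actions a0 / a1
    [[0.5, 0.5], [0.2, 0.8]],   # state 1, actions a0 / a1
]
pi = [[0.5, 0.5], [1.0, 0.0]]   # pi[s][a]: probability of action a in state s

# Induced Markov chain: M[s][t] = sum_a pi[s][a] * P[s][a][t].
M = [[sum(pi[s][a] * P[s][a][t] for a in range(2)) for t in range(2)]
     for s in range(2)]

# Steady-state distribution of the induced chain via power iteration.
mu = [0.5, 0.5]
for _ in range(10000):
    mu = [sum(mu[s] * M[s][t] for s in range(2)) for t in range(2)]

delta_goal = [0.5, 0.5]
achieves = all(abs(mu[t] - delta_goal[t]) < 1e-6 for t in range(2))
```

The paper's decision procedure goes the other way: rather than checking one candidate policy, it encodes the existence of a suitable stationary randomized policy as a linear program over state-action frequencies.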