20th Annual IEEE Symposium on Logic in Computer Science (LICS' 05)
DOI: 10.1109/lics.2005.39

Quantitative Analysis of Probabilistic Pushdown Automata: Expectations and Variances

Abstract: Probabilistic pushdown automata (pPDA) have been identified as a natural model for probabilistic programs with recursive procedure calls. Previous works considered the decidability and complexity of the model-checking problem for pPDA and various probabilistic temporal logics. In this paper we concentrate on computing the expected values and variances of various random variables defined over runs of a given probabilistic pushdown automaton. In particular, we show how to compute the expected accumulated reward…
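The quantities in the abstract rest on termination probabilities, which for a pPDA are the least solution of a fixed-point system of polynomial equations. The sketch below, a minimal illustration assuming a toy one-symbol pPDA that is not taken from the paper (rules X → XX with probability p and X → ε with probability 1 − p), approximates that least fixed point by Kleene iteration; it shows the flavor of the computation, not the paper's algorithm.

```python
# A minimal sketch, assuming a toy one-symbol pPDA (not from the paper):
#   X -> X X   with probability p        (a recursive call)
#   X -> eps   with probability 1 - p    (the call returns)
# Its termination probability is the least solution of x = (1 - p) + p * x^2,
# which Kleene (fixed-point) iteration approximates from below.

def termination_probability(p: float, tol: float = 1e-12, max_iter: int = 1_000_000) -> float:
    """Approximate the least fixed point of x = (1 - p) + p * x**2."""
    x = 0.0                      # start below the least fixed point
    for _ in range(max_iter):
        x_next = (1.0 - p) + p * x * x
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x                     # convergence slows down near the critical case p = 0.5

if __name__ == "__main__":
    for p in (0.3, 0.4, 0.7):
        print(f"p = {p}: P(termination) ~= {termination_probability(p):.6f}")
    # Expected: ~1.0 for p = 0.3 and p = 0.4, and ~(1-p)/p = 0.428571 for p = 0.7.
```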

Cited by 44 publications (62 citation statements)
References 16 publications
“…Theorem 8 (see [23]) Let ∆ = (Q, Γ, →, Prob) be a pPDA, pα ∈ Q × Γ* a configuration of ∆, C a regular set of configurations of ∆ represented by a DFA A such that P(Reach(pα, C)) > 0, and f a simple reward function. Further, let ρ > 0 and ε > 0 be rational constants.…”
Section: Computing and Approximating The Expected Total Accumulated Reward
confidence: 99%
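As a hedged illustration of what "a regular set of configurations of ∆ represented by a DFA A" means in the theorem statement: a configuration pα pairs a control state p with a stack word α, and C is regular when a finite automaton over the stack alphabet accepts exactly the stack words of its configurations. The DFA and target set below are invented for illustration and are not taken from [23].

```python
# Hedged illustration (the DFA and the target set are invented, not from [23]):
# here C = { p w : w in X*Y }, i.e. configurations of control state p whose
# bottom stack symbol is Y with only X's above it.

from typing import Dict, Set, Tuple

class DFA:
    def __init__(self, start: int, accepting: Set[int], delta: Dict[Tuple[int, str], int]):
        self.start, self.accepting, self.delta = start, accepting, delta

    def accepts(self, word: str) -> bool:
        state = self.start
        for symbol in word:
            nxt = self.delta.get((state, symbol))
            if nxt is None:              # missing transition = reject
                return False
            state = nxt
        return state in self.accepting

# State 0: only X's read so far; state 1: a single trailing Y was just read.
dfa = DFA(start=0, accepting={1}, delta={(0, "X"): 0, (0, "Y"): 1})

def in_C(control: str, stack: str) -> bool:
    """Membership of the configuration (control state, stack word read top to bottom) in C."""
    return control == "p" and dfa.accepts(stack)

print(in_C("p", "XXY"))   # True
print(in_C("p", "XYX"))   # False
```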
“…Thus, we can also express the "expected contribution" of Z to the total reward accumulated along such a path. These considerations lead to a system of equations similar to Expect ∆ (we refer to [23] for details). Hence, Theorem 8 holds also for linear reward functions without any change.…”
Section: Computing and Approximating The Expected Total Accumulated Reward
confidence: 99%
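To give the cited "system of equations similar to Expect ∆" some concrete shape, here is a back-of-the-envelope version for the toy pPDA X → XX (probability p), X → ε (probability 1 − p) with reward 1 per step: conditioning on termination leaves a single linear equation in the expected total reward. This is only an illustration of the kind of equation involved, not the construction of [23].

```python
# Back-of-the-envelope sketch (toy pPDA and derivation are assumptions, not the
# Expect_Delta system of [23]). With reward 1 per step, termination probability
# t = min(1, (1-p)/p) (least root of t = (1-p) + p*t**2), and the conditional
# expected total reward E satisfies one linear equation:
#
#     t * E = (1 - p) * 1 + p * t**2 * (1 + 2 * E)    =>    E = 1 / (1 - 2*p*t)
#
# (E diverges exactly at the critical value p = 0.5, where 2*p*t = 1.)

def expected_steps_given_termination(p: float) -> float:
    t = min(1.0, (1.0 - p) / p)          # termination probability of X
    return 1.0 / (1.0 - 2.0 * p * t)

for p in (0.3, 0.7):
    print(f"p = {p}: E[steps | termination] = {expected_steps_given_termination(p):.4f}")
# Both print 2.5000: conditioned on termination, the two biased walks behave alike.
```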
“…They are formally equivalent to probabilistic Pushdown Systems (pPDSs) ([2,3]), and they define a class of infinite-state Markov chains that generalize a number of well studied stochastic models such as Stochastic Context-Free Grammars (SCFGs) and Multi-Type Branching Processes. In a series of recent papers ([4,5,6,7]), the second author and M. Yannakakis have developed algorithms for analysis and model checking of RMCs and their controlled and game extensions: 1-exit Recursive Markov Decision Processes (1-RMDPs) and 1-exit Recursive Simple Stochastic Games (1-RSSGs).…”
Section: Introduction
confidence: 99%
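Among the models named in this excerpt, multi-type branching processes give a compact picture of the fixed-point computations that RMC/pPDS analysis generalizes: extinction probabilities form the least solution of a polynomial system q = f(q). The two-type process in the sketch below is invented for illustration.

```python
# Hedged sketch (the example process is an assumption, not from the cited work):
# extinction probabilities of a multi-type branching process are the least
# fixed point of a polynomial system, approximated here by Kleene iteration.

def extinction_probabilities(tol: float = 1e-12, max_iter: int = 100_000):
    qa, qb = 0.0, 0.0                    # start at 0 and iterate upwards
    for _ in range(max_iter):
        # Type A: prob 0.2 -> one A and one B,  prob 0.8 -> no offspring.
        # Type B: prob 0.6 -> two B,            prob 0.4 -> no offspring.
        qa_next = 0.2 * qa * qb + 0.8
        qb_next = 0.6 * qb * qb + 0.4
        if max(abs(qa_next - qa), abs(qb_next - qb)) < tol:
            return qa_next, qb_next
        qa, qb = qa_next, qb_next
    return qa, qb

qa, qb = extinction_probabilities()
print(f"extinction: type A ~= {qa:.6f}, type B ~= {qb:.6f}")   # ~0.923077, ~0.666667
```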