Automated Termination Analysis of Polynomial Probabilistic Programs
Preprint, 2021
DOI: 10.26226/morressier.604907f41a80aac83ca25d19

Cited by 4 publications (3 citation statements)
References 39 publications
“…Various martingale-based approaches, such as [15,19,20,2,43], aim to synthesize probabilistic loop invariants over R-valued variables; see [49] for a recent survey. Most of these approaches yield invariants for proving almost-sure termination or for bounding expected runtimes, whereas [1] employs a CEGIS loop to train a neural network that learns a ranking supermartingale proving positive almost-sure termination of (possibly continuous) probabilistic programs.…”
Section: Related Work
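
To make the cited notion concrete, here is a minimal, hypothetical sketch (a standard textbook-style toy loop, not an example from the cited works) of checking a ranking-supermartingale condition; the loop, the candidate function V(x) = 2x, and the decrease bound epsilon = 1 are all assumptions chosen for illustration.

# Hypothetical loop (illustration only, not a benchmark from the cited papers):
#   while x > 0: x := x - 1 with probability 3/4, else x := x + 1
# Candidate ranking supermartingale: V(x) = 2*x.
# Ranking condition: E[V(x') | x] <= V(x) - epsilon for all x > 0, with epsilon > 0.

def expected_V_next(x):
    """Exact one-step expectation of V(x') = 2*x' for the toy loop."""
    return 0.75 * 2 * (x - 1) + 0.25 * 2 * (x + 1)  # simplifies to 2*x - 1

def ranking_condition_holds(x, epsilon=1.0):
    """Check E[V(x')] <= V(x) - epsilon at a given state x > 0."""
    return expected_V_next(x) <= 2 * x - epsilon

# The condition holds at every state x > 0 (here checked on a sample of states),
# which certifies positive almost-sure termination of the toy loop.
assert all(ranking_condition_holds(x) for x in range(1, 1000))
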
“…Experimental Setup. All our seven examples in Table 1 implement polynomial loop updates and fall into the class of probabilistic loops supported by [30].…”
Section: Experimental Evaluation
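
As a rough illustration of the shape of such programs (this is a hypothetical toy loop, not one of the benchmarks from Table 1; the variables, coefficients, and distributions are assumptions), a probabilistic loop with polynomial updates can be sketched in Python as follows.

import random

def polynomial_prob_loop(n, seed=None):
    """Run a toy probabilistic loop with polynomial updates for n iterations.

    Each iteration flips a fair coin and draws a standard normal sample, then
    updates (x, y) with polynomial expressions in the program variables,
    mimicking the general shape of loops handled by tools such as Polar [30].
    """
    rng = random.Random(seed)
    x, y = 1.0, 0.0
    for _ in range(n):
        g = rng.gauss(0.0, 1.0)          # fresh noise each iteration
        if rng.random() < 0.5:           # probabilistic branch
            x = 0.5 * x + 0.1 * y * y    # degree-2 polynomial update
        else:
            x = x - 0.2 * x * y          # degree-2 polynomial update
        y = 0.9 * y + g                  # linear update with additive noise
    return x, y

print(polynomial_prob_loop(100, seed=0))
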
“…As such, for each example of Table 1, exact higher-order moments of the random loop variables can be computed using the algorithmic approach of [30]. In our work, we use the Polar tool of [30] to derive a finite set M of exact higher-order moments for each PP of Table 1, and we set S_M = M to be further used in Algorithm 1. Further, we generate our sampled data (Sample Data) by executing each PP e = 1000 times for n = 100 loop iterations.…”
Section: Experimental Evaluation
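
The sampling step described above could be sketched as follows; the toy loop body, variable names, and moment orders are assumptions made for illustration, while e = 1000 executions and n = 100 loop iterations match the numbers reported in the excerpt (the exact moments to compare against would come from the Polar tool of [30]).

import random

def run_pp_once(n, rng):
    """One execution (n loop iterations) of a toy probabilistic loop with a
    polynomial update; an illustrative stand-in for a PP from Table 1."""
    x, y = 1.0, 0.0
    for _ in range(n):
        x = 0.5 * x + rng.gauss(0.0, 1.0) if rng.random() < 0.5 else 0.5 * x - 1.0
        y = y + 0.01 * x * x              # degree-2 polynomial update
    return y

def empirical_moments(e=1000, n=100, orders=(1, 2, 3), seed=0):
    """Estimate the higher-order moments E[y^k] after n iterations from e
    independent executions, for comparison against the exact moments computed
    symbolically (e.g., with the Polar tool of [30])."""
    rng = random.Random(seed)
    samples = [run_pp_once(n, rng) for _ in range(e)]
    return {k: sum(s ** k for s in samples) / e for k in orders}

print(empirical_moments())
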