2021
DOI: 10.1007/978-3-030-81688-9_1

Learning Probabilistic Termination Proofs

Abstract: We present the first machine learning approach to the termination analysis of probabilistic programs. Ranking supermartingales (RSMs) prove that probabilistic programs halt, in expectation, within a finite number of steps. While previously RSMs were directly synthesised from source code, our method learns them from sampled execution traces. We introduce the neural ranking supermartingale: we let a neural network fit an RSM over execution traces and then we verify it over the source code using satisfiability modulo theories […]
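To make the idea concrete: an RSM is a function V over program states whose expected value decreases by at least some margin eps > 0 on every step of a live loop iteration. The sketch below is a hypothetical toy, not the paper's implementation: it samples traces of a made-up biased random walk and fits a linear candidate V(x) = a*x satisfying the expected-decrease condition empirically.

```python
import random

random.seed(0)

def step(x):
    # toy probabilistic loop body: while x > 0, decrement x with
    # probability 3/4, otherwise increment it (biased random walk)
    return x - 1 if random.random() < 0.75 else x + 1

def sample_pairs(n_traces=200, max_len=50):
    # collect (state, successor) pairs from simulated executions
    pairs = []
    for _ in range(n_traces):
        x = random.randint(1, 10)
        for _ in range(max_len):
            if x <= 0:
                break
            nxt = step(x)
            pairs.append((x, nxt))
            x = nxt
    return pairs

def fit_linear_rsm(pairs, eps=0.1):
    # empirical drift E[x' - x]; for this walk it is about -0.5
    drift = sum(nxt - x for x, nxt in pairs) / len(pairs)
    if drift >= 0:
        return None  # no candidate of the form a*x fits the data
    # choose a so that V(x) = a*x decreases by eps in expectation:
    # E[V(x')] - V(x) = a * drift <= -eps  =>  a >= eps / -drift
    return eps / -drift

pairs = sample_pairs()
a = fit_linear_rsm(pairs)
```

In the paper's method the candidate V is a neural network trained on such traces and the resulting candidate is then verified symbolically over the source code; the closed-form fit above only illustrates the trace-driven learning direction.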



Cited by 14 publications (9 citation statements). References 44 publications (49 reference statements).
“…[10] can prove bounds on expected values via symbolic reasoning and Doob's decomposition, which, however, requires user-supplied invariants and hints. [1] employs a CEGIS loop to train a neural network dedicated to learning a ranking supermartingale witnessing UPAST of (possibly continuous) probabilistic programs. They also use counterexamples provided by SMT solvers to guide the learning process.…”
Section: Related Work (mentioning, confidence: 99%)
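The CEGIS loop described in this excerpt alternates a learner, which proposes a candidate RSM, with a verifier, which searches for states violating the expected-decrease condition and feeds them back as counterexamples. The following minimal sketch uses a hypothetical biased random walk, with a bounded enumeration standing in for the SMT query:

```python
def expected_next_v(a, x):
    # exact one-step expectation of V(x) = a*x for a walk that moves
    # to x-1 with probability 3/4 and to x+1 with probability 1/4
    return 0.75 * a * (x - 1) + 0.25 * a * (x + 1)

def find_counterexample(a, eps, domain=range(1, 1000)):
    # stand-in for an SMT query: look for a state where the expected
    # decrease of the candidate V(x) = a*x falls below the margin eps
    for x in domain:
        if a * x - expected_next_v(a, x) < eps:
            return x  # counterexample state
    return None

def cegis(eps=1.0, max_iters=100):
    a = 0.0  # initial candidate coefficient
    for _ in range(max_iters):
        cex = find_counterexample(a, eps)
        if cex is None:
            return a  # verified: V(x) = a*x decreases by >= eps
        a += 0.5  # learner step: strengthen candidate on the cex
    return None  # gave up without finding a certificate
```

Here the "learner" is a trivial coefficient bump; in the cited approach the learner is a neural network retrained on the accumulated counterexamples, and the verifier is an SMT solver reasoning over the program's source-level semantics.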
“…This is a qualitative reachability property, in contrast to the quantitative properties studied here. In particular, [3] use a CEGIS-based approach to synthesise neural RSMs.…”
Section: Martingales (mentioning, confidence: 99%)
“…Widely applied in many areas of artificial intelligence, neural networks are increasingly used as representations of formal certificates for the correctness of systems. Previous work has applied machine learning to the formal verification of computer programs [3,26,40,43,45,51,61], and of dynamical and control systems [2,14,34,44,46,53].…”
Section: Introduction (mentioning, confidence: 99%)
“…We are not aware of other data-driven methods for learning probabilistic invariants, but recent work by Abate et al. [36] proves probabilistic termination by learning ranking supermartingales from trace data. Our method for learning sub-invariants (Section 6) can be seen as a natural generalization of their approach.…”
Section: Data-driven Invariant Synthesis (mentioning, confidence: 99%)