# Parameter Synthesis in Markov Models: A Gentle Survey (2022)

**Abstract:** This paper surveys the analysis of parametric Markov models whose transitions are labelled with functions over a finite set of parameters. These models are symbolic representations of uncountably many concrete probabilistic models, each obtained by instantiating the parameters. We consider various analysis problems for a given logical specification ϕ: do all parameter instantiations within a given region of parameter values satisfy ϕ? Which instantiations satisfy ϕ and which ones do not? And how can all such…
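To make the region-verification question concrete, consider a hypothetical toy parametric Markov chain (not from the survey): from state s0 we move to s1 with probability p and to a fail state with probability 1−p; from s1 we reach the target with probability p and fall back to s0 with probability 1−p. Solving the reachability equations x0 = p·x1 and x1 = p + (1−p)·x0 gives the rational function x0 = p²/(p² − p + 1). The sketch below evaluates this function exactly and checks a region [lo, hi] against a threshold by naive grid sampling; real synthesis tools replace the sampling loop with sound techniques such as parameter lifting or SMT solving, so this is illustration only.

```python
from fractions import Fraction

def reach(p: Fraction) -> Fraction:
    """Reachability probability of the toy pMC above, as a rational
    function of the parameter p: p^2 / (p^2 - p + 1)."""
    return p * p / (p * p - p + 1)

def region_holds(lo: Fraction, hi: Fraction,
                 threshold: Fraction, samples: int = 100) -> bool:
    """Naive grid check: does reach(p) >= threshold hold at every
    sample point of [lo, hi]?  Sampling can refute a region but cannot
    soundly verify it; sound tools use parameter lifting or SMT."""
    for i in range(samples + 1):
        p = lo + (hi - lo) * Fraction(i, samples)
        if reach(p) < threshold:
            return False
    return True

# Example: within the region p in [0.8, 0.95], is the target reached
# with probability at least 1/2?
print(region_holds(Fraction(4, 5), Fraction(19, 20), Fraction(1, 2)))
```

Working with `Fraction` keeps the arithmetic exact, mirroring the fact that reachability probabilities in parametric Markov chains are rational functions of the parameters.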

## Citation Statements (4)


“…The key of our approach to tackle various synthesis and inference problems on (p)BNs is to exploit model-checking techniques on MCs (Baier & Katoen, 2008;Katoen, 2016;Baier et al, 2018) and synthesis techniques (Junges et al, 2019) on pMCs. To that end, we transform a (p)BN into a (p)MC.…”

confidence: 99%

“…a given objective, e.g., is the probability to reach some states below (or above) a given threshold λ? Due to algorithmic improvements, nowadays Markov models with hundreds of thousands of states and tens or hundreds of parameters are in reach (Dehnert et al, 2015;Quatmann et al, 2016;Gainer et al, 2018;Fang et al, 2021;Heck et al, 2022); for a recent overview see (Junges et al, 2019).…”

confidence: 99%

“…Interval MDPs [25,43,23] and SGs [38] do not allow for dependencies between states and thus cannot model features such as various obstacle positions. Parametric MDPs [2,44,24] assume controllable uncertainty and do not consider robustness of policies.…”

confidence: 99%

“…[42] employs a game-based abstraction approach to efficiently solve problems with specific properties. In [22], finite-state controllers for POMDPs are computed using parameter synthesis for Markov chains [19,21] by applying convex optimization techniques [12,13]. Another work employs machine learning techniques together with formal verification to achieve sound but not optimal solutions [7].…”

confidence: 99%