2021
DOI: 10.1007/978-3-030-83723-5_15

On Correctness, Precision, and Performance in Quantitative Verification

Abstract: Quantitative verification tools compute probabilities, expected rewards, or steady-state values for formal models of stochastic and timed systems. Exact results often cannot be obtained efficiently, so most tools use floating-point arithmetic in iterative algorithms that approximate the quantity of interest. Correctness is thus defined by the desired precision and determines performance. In this paper, we report on the experimental evaluation of these trade-offs performed in QComp 2020: the second friendly competition…
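The abstract's core point, that tools approximate the quantity of interest with floating-point iterative algorithms up to a desired precision, can be made concrete with a small sketch. The following is an illustrative value-iteration loop for reachability probabilities in a toy discrete-time Markov chain; the model, the function name, and the stopping criterion are assumptions made here for illustration and are not taken from the paper or from any competing tool.

```python
# A minimal sketch, assuming a toy Markov chain given as a dict from each state
# to its list of (successor, probability) pairs. It simply iterates the Bellman
# equations in double precision and stops once the largest per-sweep change is
# below epsilon.

def reachability_probabilities(P, target, epsilon=1e-6, max_iters=100_000):
    x = {s: (1.0 if s in target else 0.0) for s in P}
    for _ in range(max_iters):
        delta, new_x = 0.0, {}
        for s in P:
            new_x[s] = 1.0 if s in target else sum(p * x[t] for t, p in P[s])
            delta = max(delta, abs(new_x[s] - x[s]))
        x = new_x
        if delta < epsilon:   # naive stopping criterion; see the note below
            break
    return x

# Illustrative chain: from state 0, loop with 0.3, reach target 2 with 0.5,
# get absorbed in sink 3 with 0.2; the true answer is 5/7.
P = {0: [(0, 0.3), (2, 0.5), (3, 0.2)], 2: [(2, 1.0)], 3: [(3, 1.0)]}
print(reachability_probabilities(P, {2})[0])
```

Note that a per-sweep change below epsilon does not, by itself, guarantee that the returned value is within epsilon of the true probability; this gap between the stopping criterion and the actual precision is exactly the kind of correctness question the competition examines.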

Cited by 24 publications (15 citation statements). References: 85 publications.
“…Instead of returning the distribution over a predicate whether a target state has been visited, the Dice program can return distributions over (bounded) quantities. In the finite horizon case, expected cumulative rewards (that assign to every finite path a bounded quantity rather … [footnote 14:] The semantics can be thought of as applying a uniform scheduler to an underlying MDP where all actions are represented. [footnote 15:] Recall, Prism semantics require that there are no data races.…”
Section: B3 Discussion On Sampling and Other Properties (mentioning)
confidence: 99%
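The footnote's reading of the semantics, a uniform scheduler applied to an underlying MDP, can be illustrated with a short sketch: nondeterminism is resolved by choosing every enabled action with equal probability, and the expected cumulative reward over a finite horizon is then computed by backward induction. The model, rewards, horizon, and function name below are invented for illustration and do not reproduce Dice's or Prism's actual semantics.

```python
# Illustrative only: a uniform scheduler turns a toy MDP into a Markov chain,
# and the expected cumulative state reward over a finite horizon is computed
# by backward induction over the remaining number of steps.

def expected_cumulative_reward(mdp, rewards, horizon):
    """mdp: state -> list of actions, each a list of (successor, probability)."""
    value = {s: 0.0 for s in mdp}            # expected reward-to-go with 0 steps left
    for _ in range(horizon):
        new_value = {}
        for s, actions in mdp.items():
            weight = 1.0 / len(actions)      # uniform scheduler over enabled actions
            step = sum(weight * sum(p * value[t] for t, p in dist) for dist in actions)
            new_value[s] = rewards[s] + step
        value = new_value
    return value

# From s0 either stay in s0 (reward 1 per visit) or move to the reward-free s1.
mdp = {"s0": [[("s0", 1.0)], [("s1", 1.0)]], "s1": [[("s1", 1.0)]]}
rewards = {"s0": 1.0, "s1": 0.0}
print(expected_cumulative_reward(mdp, rewards, horizon=3)["s0"])   # 1.75
```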
“…There is no clear-cut model checking technique that is superior to others (see QCOMP 2020 [14]). We demonstrate that, while Rubicon is not competitive on some commonly used benchmarks, it improves a modern model checking portfolio approach on a significant set of benchmarks.…”
Section: Empirical Comparisons (mentioning)
confidence: 99%
“…However, rounding away from the theoretical fixpoint in the updates of l and u means that we may reach an effective fixpoint (where l and u no longer change because all newly computed values round down/up to the values from the previous iteration) at a point where the relative difference of l(s_I) and u(s_I) is still above ε. This will happen in practice: In QComp 2020 [6], mcsta participated in the floating-point correct track by letting VI run until it reached a fixpoint under the default rounding mode with double precision. In 9 of the 44 benchmark instances that …”
Section: Correctly Rounding Interval Iteration (mentioning)
confidence: 99%
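The effective-fixpoint phenomenon described in this quote can be sketched as follows: an interval-iteration loop keeps a lower vector lo and an upper vector up, rounds them outward, and stops either when the relative difference at the initial state drops to ε or when outward rounding reproduces the same vectors. This is only an illustrative approximation of the idea and not mcsta's implementation; in particular, math.nextafter is used as a crude stand-in for switching the hardware rounding mode, and the model data is made up.

```python
# Illustrative sketch: lower/upper value vectors, outward rounding via
# math.nextafter (one ulp per sum, which is not a rigorous error bound for
# multi-term sums), relative-difference stopping, and detection of an
# "effective fixpoint" where the bounds stop moving before reaching epsilon.
import math

def interval_iteration(P, target, zero_states, s_init, epsilon=1e-6):
    """P: state -> list of (successor, probability). zero_states: states known,
    e.g. by graph analysis, to reach the target with probability 0; without this
    precomputation the upper bound need not converge."""
    lo = {s: (1.0 if s in target else 0.0) for s in P}
    up = {s: (0.0 if s in zero_states else 1.0) for s in P}
    while True:
        new_lo, new_up, changed = dict(lo), dict(up), False
        for s in P:
            if s in target or s in zero_states:
                continue
            new_lo[s] = max(0.0, math.nextafter(sum(p * lo[t] for t, p in P[s]), -math.inf))
            new_up[s] = min(1.0, math.nextafter(sum(p * up[t] for t, p in P[s]), math.inf))
            changed |= new_lo[s] != lo[s] or new_up[s] != up[s]
        lo, up = new_lo, new_up
        if up[s_init] == 0.0 or (up[s_init] - lo[s_init]) / up[s_init] <= epsilon:
            return lo[s_init], up[s_init]       # requested relative precision reached
        if not changed:
            return lo[s_init], up[s_init]       # effective fixpoint: bounds stopped moving

# Toy chain: target {2}; sink 3 cannot reach it, so it is a zero state.
P = {0: [(0, 0.3), (2, 0.5), (3, 0.2)], 2: [(2, 1.0)], 3: [(3, 1.0)]}
print(interval_iteration(P, target={2}, zero_states={3}, s_init=0))
```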
“…In many cases, we can restrict to rational values, which simplifies the theory and facilitates "exact" algorithms operating on arbitrary-precision rational number datatypes. These algorithms however only work for relatively small models (as shown in the most recent QComp 2020 competition of quantitative verification tools [6]). In this paper, we thus focus on the PMC techniques that scale to large problems: those building upon iterative numerical algorithms, in particular value iteration (VI) [8].…”
Section: Introduction (mentioning)
confidence: 99%
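The contrast this quote draws, exact algorithms over arbitrary-precision rationals versus scalable floating-point iteration, can be sketched with a toy example: the reachability linear system (I - A) x = b solved exactly with Python's fractions.Fraction and plain Gaussian elimination. The chain and all of its numbers are invented for illustration; real exact engines use far more sophisticated machinery, and the floating-point alternative is the kind of value-iteration loop sketched after the abstract above.

```python
# Toy illustration, not a real engine: solve the reachability system exactly
# over rationals. A holds transition probabilities among the transient states,
# b the one-step probabilities of hitting the target.
from fractions import Fraction as F

def solve_exact(A, b):
    """Gaussian elimination over Fractions for (I - A) x = b."""
    n = len(b)
    M = [[(F(1) if i == j else F(0)) - A[i][j] for j in range(n)] + [b[i]]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [vr - M[r][col] * vc for vr, vc in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

A = [[F(1, 3), F(1, 3)], [F(0), F(1, 2)]]   # transient-to-transient probabilities
b = [F(1, 6), F(1, 3)]                      # one-step probabilities of reaching the target
print(solve_exact(A, b))                    # [Fraction(7, 12), Fraction(2, 3)], exactly
```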