2014
DOI: 10.1007/978-3-319-11936-6_13

Modelling and Analysis of Markov Reward Automata

Abstract: Costs and rewards are important ingredients for many types of systems, modelling critical aspects like energy consumption, task completion, repair costs, and memory usage. This paper introduces Markov reward automata, an extension of Markov automata that allows the modelling of systems incorporating rewards (or costs) in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Rewards come in two flavours: action rewards, acquired instantaneously when taking a trans…
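The model class described in the abstract combines nondeterministic action choice, discrete probabilistic branching, exponentially timed (Markovian) transitions, state rewards accrued per unit of sojourn time, and action rewards gained instantaneously when a transition is taken. A minimal, purely illustrative Python encoding of such a structure is sketched below; all field names are hypothetical and not taken from the paper or from any tool:

```python
from dataclasses import dataclass, field

@dataclass
class MarkovRewardAutomaton:
    """Illustrative container for a Markov reward automaton (MRA).

    Hypothetical sketch only: the field names are chosen for this example
    and do not come from the paper or any implementation.
    """
    states: set = field(default_factory=set)
    # Probabilistic, action-labelled transitions:
    #   state -> action -> list of (probability, successor) pairs
    prob_trans: dict = field(default_factory=dict)
    # Markovian (exponentially timed) transitions: state -> list of (rate, successor)
    markov_trans: dict = field(default_factory=dict)
    # State rewards, accumulated per unit of time spent in a state
    state_reward: dict = field(default_factory=dict)
    # Action rewards, gained instantaneously when the transition is taken
    action_reward: dict = field(default_factory=dict)

# Tiny example: state 0 waits with rate 2.0 while earning reward 1.0 per time
# unit; from state 1 a 'repair' action leads back to 0 at an instantaneous cost.
mra = MarkovRewardAutomaton()
mra.states = {0, 1}
mra.markov_trans[0] = [(2.0, 1)]
mra.state_reward[0] = 1.0
mra.prob_trans[1] = {"repair": [(1.0, 0)]}
mra.action_reward[(1, "repair")] = 5.0
```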

Cited by 27 publications (26 citation statements). References 32 publications (42 reference statements).
“…Moreover, multi-objective model checking is supported, where we straightforwardly extend the value iteration-based approach of [21] to sound value iteration. We also implemented the optimizations given in Sec- all MCs, MDPs, and CTMCs from the PRISM benchmark suite [22], several case studies from the PRISM website www.prismmodelchecker.org, Markov automata accompanying IMCA [23], and multi-objective MDPs considered in [21]. In total, 130 model and property instances were considered.…”
Section: Experimental Evaluation
confidence: 99%
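The statement above refers to a value iteration-based approach and its sound variant. For orientation, the sketch below shows plain value iteration for maximal reachability probabilities on an MDP (not the sound variant, which additionally maintains a converging upper bound); the MDP encoding and function name are hypothetical:

```python
def value_iteration(transitions, targets, epsilon=1e-6):
    """Plain value iteration for maximal reachability probabilities on an MDP.

    transitions: dict mapping each state to a list of actions, where every
                 action is a list of (probability, successor) pairs
                 (hypothetical encoding, chosen only for this sketch).
    targets:     set of goal states.
    The loop stops once the largest change drops below epsilon; this standard
    stopping criterion is exactly what sound value iteration strengthens by
    also tracking an upper bound on the true values.
    """
    values = {s: (1.0 if s in targets else 0.0) for s in transitions}
    while True:
        delta = 0.0
        for s in transitions:
            if s in targets or not transitions[s]:
                continue
            best = max(sum(p * values[t] for p, t in action)
                       for action in transitions[s])
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < epsilon:
            return values


# Tiny example: from state 0 one action reaches goal state 2 with probability
# 0.5 and loops back otherwise; the exact value for state 0 is 1.0.
mdp = {0: [[(0.5, 2), (0.5, 0)]], 1: [], 2: []}
print(value_iteration(mdp, targets={2}))
```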
“…The generation of the DFT, including the compositional aggregation, is done using the CADP tool set [6]. The generated I/O-IMC can be translated to the Markov Reward Model Checker (MRMC) [10] or to the Interactive Markov Chain Analyzer (IMCA) [8]. Finally, the requested dependability metrics, which are (a) the reliability for one or more mission times T, (b) the probability of a system failure during an interval [T1, T2], or (c) the mean time to failure, can be computed.…”
Section: DFTCalc
confidence: 99%
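The three dependability metrics listed in the quote (reliability at a mission time, failure probability within an interval, and mean time to failure) can all be derived from the underlying continuous-time Markov chain. The following sketch computes them for a hypothetical three-state CTMC with an absorbing failed state; the generator matrix is made up for illustration and is unrelated to any model produced by DFTCalc:

```python
import numpy as np
from scipy.linalg import expm

# Toy CTMC with states 0 (working), 1 (degraded), 2 (failed, absorbing).
# Hypothetical generator matrix Q; every row sums to zero.
Q = np.array([[-0.2,  0.2,  0.0],
              [ 0.0, -0.5,  0.5],
              [ 0.0,  0.0,  0.0]])
init = np.array([1.0, 0.0, 0.0])
failed = [2]

def failure_prob(T):
    """Probability of having failed by time T: pi(T) = init @ expm(Q * T)."""
    return (init @ expm(Q * T))[failed].sum()

# (a) reliability at mission time T
T = 10.0
print("reliability:", 1.0 - failure_prob(T))

# (b) probability of a system failure during the interval [T1, T2]
#     (valid here because the failed state is absorbing)
T1, T2 = 5.0, 10.0
print("interval failure prob:", failure_prob(T2) - failure_prob(T1))

# (c) mean time to failure: expected time to absorption from the initial state,
#     i.e. the first component of -Q_TT^{-1} * 1 over the transient states {0, 1}.
transient = [0, 1]
Qtt = Q[np.ix_(transient, transient)]
mttf = np.linalg.solve(-Qtt, np.ones(len(transient)))
print("MTTF:", mttf[0])
```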
“…All these models are extensible with rewards (or dually: costs) to states and, for non-deterministic models, to actions. Most probabilistic model checkers support Markov chains and/or MDPs; MAs have so far only been supported by a few tools [13,14]. Modeling languages.…”
Section: Introduction
confidence: 99%
“…Markov Automata. As Prism does not support the verification of MAs, we compare Storm with the only other tool capable of verifying MAs: IMCA [13]. We used the models provided by IMCA; the results are depicted in Fig.…
confidence: 99%