2008
DOI: 10.1016/j.artint.2007.12.002
Reachability analysis of uncertain systems using bounded-parameter Markov decision processes

Abstract: Verification of reachability properties for probabilistic systems is usually based on variants of Markov processes. Current methods assume an exact model of the dynamic behavior and are not suitable for realistic systems that operate in the presence of uncertainty and variability. This research note extends existing methods for Bounded-parameter Markov Decision Processes (BMDPs) to solve the reachability problem. BMDPs are a generalization of MDPs that allows modeling uncertainty. Our results show that interva…

Cited by 32 publications (24 citation statements)
References 18 publications
“…Secondly, from the side of algorithmic developments, several verification methods for uncertain Markov models have been proposed. The problems of computing reachability probabilities and expected total reward for IMCs and IMDPs were first investigated in [10,36]. Then, several of their PCTL and LTL model checking algorithms were introduced in [2,8,10] and [23,33,35], respectively.…”
Section: Introduction
Mentioning confidence: 99%
“…Interval MDPs [28] (also called Bounded-parameter MDPs [13,37]) address this need by bounding the probability of each successor state by an interval instead of a fixed number. In such a model, the transition probabilities are not fully specified, and this uncertainty again needs to be resolved nondeterministically.…”
Section: Introduction
Mentioning confidence: 99%
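To make the interval transition function described above concrete, here is a minimal Python sketch of an interval (bounded-parameter) MDP. The data layout, state names, and the `is_consistent` helper are illustrative assumptions, not taken from the cited papers.

```python
# Hypothetical sketch of a bounded-parameter (interval) MDP: each
# (state, action, successor) transition carries a probability interval
# [lo, hi] rather than a single point value.
IntervalMDP = dict  # state -> action -> {successor: (lo, hi)}

imdp: IntervalMDP = {
    "s0": {"a": {"s1": (0.6, 0.8), "s2": (0.2, 0.4)}},
    "s1": {"a": {"s1": (1.0, 1.0)}},   # absorbing goal state
    "s2": {"a": {"s2": (1.0, 1.0)}},   # absorbing failure state
}

def is_consistent(intervals: dict) -> bool:
    """An interval transition function admits at least one exact
    distribution iff the lower bounds sum to at most 1, the upper
    bounds sum to at least 1, and every interval is well formed."""
    lows = sum(lo for lo, _ in intervals.values())
    highs = sum(hi for _, hi in intervals.values())
    return lows <= 1.0 <= highs and all(
        0.0 <= lo <= hi <= 1.0 for lo, hi in intervals.values()
    )

assert all(is_consistent(dist)
           for actions in imdp.values()
           for dist in actions.values())
```

The nondeterministic resolution mentioned in the quote corresponds to choosing, at each step, some exact distribution that fits inside these intervals.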
“…The objective in this problem is to find the maximum probability that a set of states can be reached from any other state in an MDP. Prior work also solves the maximal reachability probability problem for a BMDP [30]. The key difference for a BMDP is that the expected value (maximum probability) for each state is not a scalar value, but rather a range derived from the transition probability bounds.…”
Section: Product BMDP and Optimal Policy
Mentioning confidence: 99%
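As a rough illustration of how the scalar reachability value generalizes to a range, the sketch below runs an interval value iteration: the agent maximizes over actions while the interval uncertainty is resolved adversarially for a lower bound and cooperatively for an upper bound. The greedy `resolve` step and all names are assumptions for illustration; the algorithms in the cited papers may differ in detail.

```python
def resolve(intervals, values, optimistic):
    """Pick the distribution inside the intervals that maximizes
    (optimistic=True) or minimizes (optimistic=False) the expected
    value of `values`, via a greedy assignment of the free mass."""
    succs = sorted(intervals, key=lambda s: values[s], reverse=optimistic)
    prob = {s: intervals[s][0] for s in succs}   # start at the lower bounds
    slack = 1.0 - sum(prob.values())             # probability mass still to place
    for s in succs:                              # give it to the best successors first
        bump = min(slack, intervals[s][1] - intervals[s][0])
        prob[s] += bump
        slack -= bump
    return sum(prob[s] * values[s] for s in succs)

def reachability_bounds(imdp, goal, iters=1000, tol=1e-9):
    """For every state, return lower and upper bounds on the maximal
    probability of eventually reaching a state in `goal`."""
    lo = {s: float(s in goal) for s in imdp}
    hi = dict(lo)
    for _ in range(iters):
        new_lo = {s: 1.0 if s in goal else
                  max(resolve(d, lo, optimistic=False) for d in imdp[s].values())
                  for s in imdp}
        new_hi = {s: 1.0 if s in goal else
                  max(resolve(d, hi, optimistic=True) for d in imdp[s].values())
                  for s in imdp}
        delta = max(max(abs(new_lo[s] - lo[s]), abs(new_hi[s] - hi[s])) for s in imdp)
        lo, hi = new_lo, new_hi
        if delta < tol:
            break
    return lo, hi

# With the hypothetical `imdp` sketched earlier and goal {"s1"}:
# lower, upper = reachability_bounds(imdp, {"s1"})
# lower["s0"], upper["s0"]  # -> (0.6, 0.8), i.e. a range rather than a scalar
```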