2019
DOI: 10.1007/s00224-019-09921-3

On Approximating the Stationary Distribution of Time-Reversible Markov Chains

Abstract: Approximating the stationary probability of a state in a Markov chain through Markov chain Monte Carlo techniques is, in general, inefficient. Standard random walk approaches require Õ(τ/π(v)) operations to approximate the probability π(v) of a state v in a chain with mixing time τ, and even the best available techniques still have complexity Õ(τ^1.5 / π(v)^0.5); and since these complexities depend inversely on π(v), they can grow beyond any bound in the size of the chain or in its mixing time. In this paper w…
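As a point of reference, below is a minimal Python sketch of the standard random-walk estimator the abstract alludes to: π(v) is approximated by the empirical frequency of visits to v along a single long walk, which is why the cost scales like τ/π(v). The dictionary encoding of the chain and the name estimate_stationary_prob are illustrative assumptions, not part of the paper.

```python
import random

def estimate_stationary_prob(P, v, start, walk_length, seed=0):
    """Estimate pi(v) as the fraction of time one long random walk spends
    in state v.  P maps each state to a list of (next_state, prob) pairs
    summing to 1.  For the estimate to concentrate, walk_length has to be
    on the order of tau / pi(v), which is the cost the abstract refers to."""
    rng = random.Random(seed)
    state, visits = start, 0
    for _ in range(walk_length):
        r, acc = rng.random(), 0.0
        for nxt, p in P[state]:
            acc += p
            if r <= acc:
                state = nxt
                break
        if state == v:
            visits += 1
    return visits / walk_length

# Example: lazy walk on a 3-cycle (time-reversible); the true pi is uniform, 1/3.
P = {
    0: [(0, 0.5), (1, 0.25), (2, 0.25)],
    1: [(1, 0.5), (0, 0.25), (2, 0.25)],
    2: [(2, 0.5), (0, 0.25), (1, 0.25)],
}
print(estimate_stationary_prob(P, v=0, start=1, walk_length=100_000))
```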

Cited by 8 publications (6 citation statements)
References 20 publications
“…Most related is the work of [30], which also try to identify the most relevant parts of the system, however they employ the special structure given by cellular processes to find these regions and estimate the subsequent approximation error. Many other works deal with special cases, such as queueing models [1,18], time-reversible chains [9], or positive rows (all states have a transition to one particular state) [10,12,27]. In contrast, our methods aim to deal with general Markov chains.…”
Section: Related Work (mentioning)
confidence: 99%
“…Finally, we would mention recent work on the local approximation of the stationary probability of a target state v in a Markov Chain [47], [50], [51], and on the local approximation of a single entry of the solution vector of a linear system [52], [53]. The local approximation of P(v) is a specific but nontrivial case of both, and we hope that our techniques may serve as an entry point for future developments in those directions.…”
Section: Related Work (mentioning)
confidence: 99%
“…The stationary distribution. Lee et al. [23] and Bressan et al. [8] studied the question of computing the stationary distribution 𝜋 of a Markov Chain locally. These algorithms take as input any state 𝑣, and answer if the stationary probability of 𝑣 exceeds some Δ ∈ (0, 1) and/or output an estimate of 𝜋(𝑣).…”
Section: Related Work (mentioning)
confidence: 99%
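To make the interface described in the last statement concrete, here is a hedged Python sketch of a local test "does 𝜋(𝑣) exceed Δ?", based on the classical identity 𝜋(𝑣) = 1/E[return time to 𝑣]. The truncation constant, the function name test_stationary_threshold, and the chain encoding are assumptions chosen for illustration; this is not the algorithm of Lee et al. [23] or Bressan et al. [8].

```python
import random

def test_stationary_threshold(P, v, delta, num_trials=200, seed=1):
    """Hypothetical sketch of a local test "is pi(v) >= delta?".  It uses
    the identity pi(v) = 1 / E[return time to v] for an ergodic chain and
    truncates every excursion from v after ~10/delta steps, so the work
    depends on delta rather than on the number of states.  P maps each
    state to a list of (next_state, prob) pairs summing to 1.  This only
    illustrates the interface; it is not the algorithm of [23] or [8]."""
    rng = random.Random(seed)
    cap = int(10 / delta)                 # truncation length per excursion
    total_steps = 0
    for _ in range(num_trials):
        state, steps = v, 0
        while steps < cap:
            r, acc = rng.random(), 0.0
            for nxt, p in P[state]:
                acc += p
                if r <= acc:
                    state = nxt
                    break
            steps += 1
            if state == v:                # returned to v: excursion over
                break
        total_steps += steps
    estimate = num_trials / total_steps   # 1 / (mean observed return time)
    return estimate, estimate >= delta

# Example: lazy walk on a 3-cycle, pi(0) = 1/3, so the test should accept delta = 0.25.
P = {
    0: [(0, 0.5), (1, 0.25), (2, 0.25)],
    1: [(1, 0.5), (0, 0.25), (2, 0.25)],
    2: [(2, 0.5), (0, 0.25), (1, 0.25)],
}
print(test_stationary_threshold(P, v=0, delta=0.25))
```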