In the past few years we have seen a surge in the theory of finite Markov chains, by way of new techniques for bounding the convergence to stationarity. These include functional techniques such as logarithmic Sobolev and Nash inequalities, refined spectral and entropy techniques, and isoperimetric techniques such as average and blocking conductance and the evolving set methodology. We attempt to give a more or less self-contained treatment of some of these modern techniques, after reviewing several preliminaries. We also review classical and modern lower bounds on mixing times. There have been other important contributions to this theory, such as variants on coupling techniques and decomposition methods, which are not included here; our choice was to keep analytical methods as the theme of this presentation. We illustrate the strength of the main techniques with simple examples, with a recent result on the Pollard Rho random walk for computing the discrete logarithm, and with an improved analysis of the Thorp shuffle.
On complete, non-compact manifolds and infinite graphs, Faber-Krahn inequalities have been used to estimate the rate of decay of the heat kernel. We develop this technique in the setting of finite Markov chains, proving upper and lower L^∞ mixing time bounds via the spectral profile. This approach lets us recover and refine previous conductance-based bounds on mixing time (including the Morris-Peres result), and in general leads to sharper estimates of convergence rates. We apply this method to several models, including groups with moderate growth, the fractal-like Vicsek graphs, and the product group Z_a × Z_b, to obtain tight bounds on the corresponding mixing times.

2000 Mathematics Subject Classification. 60, 68.
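The flavor of a spectral-profile bound can be sketched schematically (an illustrative form with constants and integration limits suppressed, not the precise statement of the paper). Writing π_* for the smallest stationary probability and λ(S) for the smallest Dirichlet eigenvalue of the chain restricted to a set S, one bounds the L^∞ mixing time roughly as

```latex
\tau_\infty(\varepsilon) \;\lesssim\; \int_{\pi_*}^{4/\varepsilon} \frac{dv}{v\,\Lambda(v)},
\qquad \text{where} \quad
\Lambda(v) \;=\; \inf_{\pi(S)\le v} \lambda(S)
```

is the spectral profile: small sets with good expansion have large Λ(v), so they contribute little to the integral, which is how the bound improves on a single worst-case conductance.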
The notion of conductance introduced by Jerrum and Sinclair [8] has been widely used to prove rapid mixing of Markov chains. Here we introduce a bound that extends this in two directions. First, instead of measuring the conductance of the worst subset of states, we bound the mixing time by a formula that can be thought of as a weighted average of the Jerrum-Sinclair bound (where the average is taken over subsets of states of different sizes). Furthermore, instead of just the conductance, which in graph-theoretic terms measures edge expansion, we also take into account node expansion. Our bound is related to logarithmic Sobolev inequalities, but it appears to be more flexible and easier to compute. In the case of random walks in convex bodies, we show that this new bound is better than the known worst-case bounds. This saves a factor of O(n) in the mixing time bound, which is incurred in all proofs as a "penalty" for a "bad start". We show that in a convex body in R^n with diameter D, the random walk with steps in a ball of radius δ mixes in O*(nD^2/δ^2) time (if idle steps at the boundary are not counted). This gives an O*(n^3) sampling algorithm after appropriate preprocessing, improving the previous bound of O*(n^4). The application of the general conductance bound in the geometric setting depends on an improved isoperimetric inequality for convex bodies.
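For concreteness, a single step of the ball walk described above can be sketched as follows (a minimal illustration, not the preprocessed sampler of the paper; the convex body is supplied as a membership oracle, and the name `in_body` is ours):

```python
import math
import random

def ball_walk_step(x, delta, in_body, rng):
    """One step of the ball walk in a convex body K in R^n: propose a
    uniform point in the delta-ball around x; move there if it lies in K,
    otherwise stay put (an idle step at the boundary)."""
    n = len(x)
    # Uniform direction via a normalized Gaussian vector,
    # uniform radius in the ball via u^(1/n) scaling.
    d = [rng.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(v * v for v in d))
    rad = delta * rng.random() ** (1.0 / n)
    y = [xi + rad * di / norm for xi, di in zip(x, d)]
    return y if in_body(y) else x
```

For example, with `in_body` the indicator of the unit cube, iterating this step from the center keeps the walk inside the body, with proposals that exit the cube counted as idle steps.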
We show a Birthday Paradox for self-intersections of Markov chains with uniform stationary distribution. As an application, we analyze Pollard's Rho algorithm for finding the discrete logarithm in a cyclic group G and find that, if the partition in the algorithm is given by a random oracle, then with high probability a collision occurs in Θ(√|G|) steps. Moreover, for the parallelized distinguished-points algorithm on J processors, we find that Θ(√|G|/J) steps suffice. These are the first proofs of the correct order bounds that do not assume that every step of the algorithm produces an i.i.d. sample from G.
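As an illustration of the collision mechanism, here is a minimal sketch of Pollard's Rho for the discrete logarithm in Z_p^* (assumed parameters are ours: a small prime p, a generator g of known order n; the usual three-way partition of the group stands in for the random oracle, and Floyd's tortoise-and-hare detects the self-intersection):

```python
import random
from math import gcd

def pollard_rho_dlog(g, h, p, n, max_tries=50, seed=0):
    """Find x with g^x = h (mod p), where g has order n in Z_p^*.
    Maintains the invariant that the current element equals g^a * h^b,
    so a collision of the walk yields a linear relation for x mod n."""
    rng = random.Random(seed)

    def step(x, a, b):
        # Three-way partition by residue class plays the random-oracle role.
        s = x % 3
        if s == 0:
            return (x * x) % p, (2 * a) % n, (2 * b) % n
        elif s == 1:
            return (x * g) % p, (a + 1) % n, b
        else:
            return (x * h) % p, a, (b + 1) % n

    for _ in range(max_tries):
        a0, b0 = rng.randrange(n), rng.randrange(n)
        x0 = (pow(g, a0, p) * pow(h, b0, p)) % p
        tx, ta, tb = x0, a0, b0                  # tortoise: one step at a time
        hx, ha, hb = step(*step(x0, a0, b0))     # hare: two steps at a time
        while tx != hx:
            tx, ta, tb = step(tx, ta, tb)
            hx, ha, hb = step(*step(hx, ha, hb))
        # Collision: g^ta * h^tb = g^ha * h^hb, so x * (hb - tb) = ta - ha (mod n).
        d = (hb - tb) % n
        if gcd(d, n) == 1:
            return ((ta - ha) * pow(d, -1, n)) % n
        # Otherwise the relation is degenerate; restart from a fresh point.
    return None
```

If the exponent difference is not invertible mod n, the attempt is discarded and the walk restarts from a fresh random point; by the birthday heuristic each walk self-intersects after roughly √|G| steps.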
The mixing properties of several Markov chains for sampling from configurations of a hard-core model have been examined. The model is familiar in the statistical physics of the liquid state and consists of a set of n nonoverlapping balls of radius r* in a d-dimensional hypercube. Starting from an initial configuration, standard Markov chain Monte Carlo methods may be employed to generate a configuration according to a probability distribution of interest by choosing a trial state and accepting or rejecting it as the next configuration of the Markov chain according to the Metropolis filter. Procedures to generate a trial state include moving a single particle globally within the hypercube, moving a single particle locally, and moving multiple particles at once. We prove that (i) in a d-dimensional system a single-particle global-move Markov chain is rapidly mixing as long as the density is sufficiently low, (ii) in a one-dimensional system a single-particle local-move Markov chain is rapidly mixing for arbitrary density as long as the local moves are confined to a sufficiently small neighborhood of the original particle, and (iii) the one-dimensional system can be related to a convex body, thus establishing that certain multiple-particle local-move Markov chains mix rapidly. Difficulties in extending this work are also discussed.
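A single-particle global-move chain of type (i) can be sketched as follows (a minimal two-dimensional illustration in the unit square; the function name and parameters are ours, not the paper's):

```python
import random

def hard_core_global_move(config, r, rng, steps=1):
    """Single-particle global-move Metropolis chain for hard disks of
    radius r in the unit square. Proposal: pick a particle uniformly and
    propose a uniform new center. The Metropolis filter for the hard-core
    potential accepts iff the new configuration stays nonoverlapping;
    otherwise the chain holds at its current state."""
    n = len(config)
    for _ in range(steps):
        i = rng.randrange(n)
        # Propose a center keeping the whole disk inside the square.
        cand = (rng.uniform(r, 1 - r), rng.uniform(r, 1 - r))
        feasible = all(
            (cand[0] - x) ** 2 + (cand[1] - y) ** 2 >= (2 * r) ** 2
            for j, (x, y) in enumerate(config) if j != i
        )
        if feasible:
            config[i] = cand
    return config
```

Since infeasible proposals are simply rejected, every state visited by the chain is a valid hard-core configuration, and the uniform proposal makes the chain reversible with respect to the uniform distribution on configurations.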