We consider programmable matter as a collection of simple computational elements (or particles) with limited (constant-size) memory that self-organize to solve system-wide problems of movement, configuration, and coordination. Here, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. More specifically, we seek fully distributed, local, and asynchronous algorithms that lead the system to converge to a configuration with small perimeter. We present a Markov chain based algorithm that solves the compression problem under the geometric amoebot model, for particle systems that begin in a connected configuration with no holes. The algorithm takes as input a bias parameter λ, where λ > 1 corresponds to particles favoring configurations that induce more lattice triangles within the particle system. We show that for all λ > 5, there is a constant α > 1 such that at stationarity, with all but exponentially small probability, the particles are α-compressed, meaning the perimeter of the system configuration is at most α · pmin, where pmin is the minimum possible perimeter of the particle system. We additionally prove that the same algorithm can be used for expansion for small values of λ; in particular, for all 0 < λ < √2, there is a constant β < 1 such that at stationarity, with all but exponentially small probability, the perimeter of the system configuration is at least β · pmax, where pmax is the maximum possible perimeter of the particle system.
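The core of such a biased Markov chain is a Metropolis-style acceptance step driven by the change in the number of induced lattice triangles. The sketch below illustrates only that acceptance rule, not the full distributed algorithm (which also enforces connectivity and local move validity); the function name and interface are hypothetical.

```python
import random

def metropolis_accept(delta_triangles: int, lam: float) -> bool:
    """Accept a proposed particle move with probability min(1, lam**delta_triangles).

    delta_triangles is the change in the number of induced lattice
    triangles the move would cause. With lam > 1 the chain favors
    gaining triangles (compression); with lam < 1 it favors losing
    them (expansion), matching the two regimes in the abstract.
    """
    return random.random() < min(1.0, lam ** delta_triangles)
```

A move that does not decrease the triangle count is always accepted when λ ≥ 1; triangle-destroying moves are accepted with probability λ^Δ < 1, which is what concentrates the stationary distribution on tightly packed configurations.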
In this paper we develop tools for analyzing the rate at which a reversible Markov chain converges to stationarity. Our techniques are useful when the Markov chain can be decomposed into pieces which are themselves easier to analyze. The main theorems relate the spectral gap of the original Markov chain to the spectral gaps of the pieces. In the first case the pieces are restrictions of the Markov chain to subsets of the state space; the second case treats a Metropolis-Hastings chain whose equilibrium distribution is a weighted average of equilibrium distributions of other Metropolis-Hastings chains on the same state space.
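For context, here are the standard facts (not results of this paper) that make spectral-gap bounds useful: for a lazy, ergodic, reversible chain with transition matrix $P$, eigenvalues $1 = \lambda_1 > \lambda_2 \ge \dots$, and stationary distribution $\pi$, the gap controls the mixing time via

```latex
\mathrm{gap}(P) = 1 - \lambda_2, \qquad
t_{\mathrm{rel}} = \frac{1}{\mathrm{gap}(P)}, \qquad
t_{\mathrm{mix}}(\varepsilon) \;\le\; t_{\mathrm{rel}}
\,\log\!\left(\frac{1}{\varepsilon\,\pi_{\min}}\right),
```

where $\pi_{\min} = \min_x \pi(x)$. This is why lower-bounding the gap of the full chain in terms of the gaps of its pieces immediately yields mixing-time bounds.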
We show that for all sufficiently large d, the uniform proper 3-coloring model (in physics called the 3-state antiferromagnetic Potts model at zero temperature) on Z^d admits multiple maximal-entropy Gibbs measures. This is a consequence of the following combinatorial result: if a proper 3-coloring is chosen uniformly from a box in Z^d, conditioned on color 0 being given to all the vertices on the boundary of the box which are at an odd distance from a fixed vertex v in the box, then the probability that v gets color 0 is exponentially small in d. The proof proceeds through an analysis of a certain type of cutset separating v from the boundary of the box, and builds on techniques developed by Galvin and Kahn in their proof of phase transition in the hard-core model on Z^d. Building further on these techniques, we study local Markov chains for sampling proper 3-colorings of the discrete torus Z^d_n. We show that there is a constant ρ ≈ 0.22 such that for all even n ≥ 4 and d sufficiently large, if M is a Markov chain on the set of proper 3-colorings of Z^d_n that updates the color of at most ρ n^d vertices at each step and whose stationary distribution is uniform, then the mixing time of M (the time taken for M to reach a distribution that is close to uniform, starting from an arbitrary coloring) is essentially exponential in n^(d−1).
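The simplest local chain covered by this lower bound is single-site Glauber dynamics, which updates one vertex per step (far fewer than ρ n^d). A minimal sketch of one such update on the torus Z^d_n, with hypothetical names and a dict-based coloring, assuming colorings are given as maps from d-tuples to {0, 1, 2}:

```python
import random

def glauber_step(coloring, n, d):
    """One single-site update of Glauber dynamics for proper 3-colorings
    of the discrete torus Z^d_n: pick a uniform vertex and a uniform
    color, and recolor the vertex only if no neighbor has that color.

    Since every allowed recoloring is accepted with equal probability
    and the move is its own reverse, the stationary distribution is
    uniform over proper 3-colorings.
    """
    v = tuple(random.randrange(n) for _ in range(d))
    c = random.randrange(3)
    neighbors = []
    for i in range(d):
        for s in (1, -1):
            w = list(v)
            w[i] = (w[i] + s) % n  # torus wrap-around
            neighbors.append(tuple(w))
    if all(coloring[w] != c for w in neighbors):
        coloring[v] = c
    return coloring
```

The abstract's result says that any such local, uniform-stationary chain mixes in time essentially exponential in n^(d−1) for large d, so this sketch is a valid sampler only in principle, not in practice.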
Monotonic surfaces spanning finite regions of Z^d arise in many contexts, including DNA-based self-assembly, card-shuffling and lozenge tilings. We explore how we can sample these surfaces when the distribution is biased to favor higher surfaces. We show that a natural local chain is rapidly mixing with any bias for regions in Z^2, and for bias λ > d^2 in Z^d, when d > 2. Moreover, our bounds on the mixing time are optimal on d-dimensional hypercubic regions. The proof uses a geometric distance function and introduces a variant of path coupling in order to handle distances that are exponentially large.
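In the d = 2 case a monotonic surface is a staircase lattice path, and a natural local move flips one corner at a time. The sketch below is an illustrative heat-bath version of such a chain, not the paper's exact chain; the step encoding and function name are assumptions.

```python
import random

def biased_surface_step(path, lam):
    """One heat-bath corner flip on a monotonic lattice path in Z^2.

    `path` is a list of steps, 'R' (right) or 'U' (up); the monotonic
    surface is the staircase the path traces. Resampling an adjacent
    unequal pair to ('U', 'R') with probability lam/(1+lam) changes the
    area under the path by +/-1, giving stationary weight proportional
    to lam**(area) and hence a bias toward higher surfaces when lam > 1.
    """
    i = random.randrange(len(path) - 1)
    if path[i] != path[i + 1]:  # a corner that can flip
        if random.random() < lam / (1 + lam):
            path[i], path[i + 1] = 'U', 'R'
        else:
            path[i], path[i + 1] = 'R', 'U'
    return path
```

Each move preserves the numbers of R and U steps, so the chain stays on paths with fixed endpoints; the analytic difficulty the abstract refers to is bounding how fast such chains mix, not defining them.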
We study the mixing time of a Markov chain M_nn on biased permutations, a problem arising in the context of self-organizing lists. In each step, M_nn chooses two adjacent elements k and ℓ and exchanges their positions with probability p_{ℓ,k}. Here we define two general classes of bias and give the first proofs that the chain is rapidly mixing for both. We also demonstrate that the chain is not always rapidly mixing.
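One step of such a nearest-neighbor transposition chain can be sketched as follows; the dict-based representation of the probabilities p_{ℓ,k} (with p_{ℓ,k} + p_{k,ℓ} = 1) and the function name are hypothetical.

```python
import random

def mnn_step(perm, p):
    """One step of a nearest-neighbor chain on permutations.

    Pick a uniform adjacent position; if k = perm[i] currently precedes
    l = perm[i+1], swap them with probability p[(l, k)], moving l ahead
    of k. The bias p[(l, k)] != 1/2 makes some orderings of each pair
    more likely than others at stationarity.
    """
    i = random.randrange(len(perm) - 1)
    k, l = perm[i], perm[i + 1]
    if random.random() < p[(l, k)]:
        perm[i], perm[i + 1] = l, k
    return perm
```

With all probabilities equal to 1/2 this reduces to the unbiased adjacent-transposition shuffle; the abstract's point is that for biased p the chain can mix rapidly or slowly depending on the structure of the bias.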