The Lovász Local Lemma (LLL) is a powerful tool that gives sufficient conditions for avoiding all of a given set of "bad" events with positive probability. A series of results have provided algorithms to efficiently construct structures whose existence is non-constructively guaranteed by the LLL, culminating in the recent breakthrough of Moser and Tardos [2010] for the full asymmetric LLL. We show that the output distribution of the Moser-Tardos algorithm well-approximates the conditional LLL-distribution, the distribution obtained by conditioning on all bad events being avoided. We show how a known bound on the probabilities of events in this distribution can be used for further probabilistic analysis and give new constructive and non-constructive results.

We also show that when an LLL application provides a small amount of slack, the number of resamplings of the Moser-Tardos algorithm is nearly linear in the number of underlying independent variables (not events!), and can thus be used to give efficient constructions in cases where the underlying proof applies the LLL to super-polynomially many events. Even in cases where finding a bad event that holds is computationally hard, we show that applying the algorithm to avoid a polynomial-sized "core" subset of bad events leads to a desired outcome with high probability. This is shown via a simple union bound over the probabilities of non-core events in the conditional LLL-distribution, and automatically leads to simple and efficient Monte Carlo (and in most cases RNC) algorithms.

We demonstrate this idea on several applications. We give the first constant-factor approximation algorithm for the Santa Claus problem by making an LLL-based proof of Feige constructive. We provide Monte Carlo algorithms for acyclic edge coloring, non-repetitive graph colorings, and Ramsey-type graphs. In all these applications the algorithm falls directly out of the non-constructive LLL-based proof. Our algorithms are very simple, often provide better bounds than previous algorithms, and are in several cases the first efficient algorithms known.

As a second type of application, we show that the properties of the conditional LLL-distribution can be used in cases beyond the critical dependency threshold of the LLL, where avoiding all bad events is impossible. As the first (even non-constructive) result of this kind, we show that by sampling a selected smaller core from the LLL-distribution, we can avoid a fraction of bad events that is higher than the expectation. MAX k-SAT is an illustrative example of this.
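To make the resampling process referred to above concrete, here is a minimal Python sketch of the Moser-Tardos algorithm. The representation of bad events as (predicate, dependency-set) pairs and the round limit are our own illustrative choices, not notation from the paper; for a k-CNF formula, for instance, each clause being violated would be one bad event whose dependency set is the clause's variables.

```python
import random

def moser_tardos(variables, bad_events, max_rounds=10**6):
    """Sketch of the Moser-Tardos resampling algorithm.

    variables:  dict mapping a variable name to a zero-argument sampler
                drawing that variable from its distribution.
    bad_events: list of (predicate, depends_on) pairs; predicate takes the
                current assignment and returns True if the bad event holds,
                depends_on is the set of variable names the event reads.
    Returns an assignment avoiding all bad events, or None if the round
    limit is hit (under the LLL condition the expected number of
    resamplings is small, so this is unlikely).
    """
    # Start from an independent random sample of every variable.
    assignment = {v: sample() for v, sample in variables.items()}
    for _ in range(max_rounds):
        # Find some bad event that currently holds.
        violated = next((deps for pred, deps in bad_events
                         if pred(assignment)), None)
        if violated is None:
            return assignment                 # all bad events avoided
        for v in violated:                    # resample only its variables
            assignment[v] = variables[v]()
    return None
```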
We provide the first capacity-approaching coding schemes that robustly simulate any interactive protocol over an adversarial channel that corrupts any ε fraction of the transmitted symbols. Our coding schemes achieve a communication rate of 1 − O(√(ε log log 1/ε)) over any adversarial channel. This can be improved to 1 − O(√ε) for random, oblivious, and computationally bounded channels, or if the parties have shared randomness unknown to the channel. Surprisingly, these rates exceed the 1 − Ω(√(H(ε))) = 1 − Ω(√(ε log 1/ε)) interactive channel capacity bound which [Kol and Raz; STOC'13] recently proved for random errors. We conjecture 1 − Θ(√(ε log log 1/ε)) and 1 − Θ(√ε) to be the optimal rates for their respective settings and therefore to capture the interactive channel capacity for random and adversarial errors.

In addition to being very communication efficient, our randomized coding schemes have multiple other advantages. They are computationally efficient, extremely natural, and significantly simpler than prior (non-capacity-approaching) schemes. In particular, our protocols do not employ any coding but allow the original protocol to be performed as-is, interspersed only by short exchanges of hash values. When hash values do not match, the parties backtrack. We feel this approach is by far the simplest and most natural explanation for why and how robust interactive communication in a noisy environment is possible.

Our protocols work for the standard setting in which the input protocol is alternating and the simulation has an alternating or non-adaptive, i.e., fixed, communication order. The impossibility result of [12] does not hold for alternating input protocols. Instead, an input protocol with a more complex communication order is assumed there, while the simulations are restricted to be non-adaptive. We point out that insisting on non-adaptive simulations is too restrictive for general input protocols: independently of the amount of noise, most (non-alternating) input protocols cannot be simulated robustly in a non-adaptive manner, i.e., a rate of 1 − o(1) is impossible even if the channel introduces merely a single random error. The 1 − O(√(H(ε)))-rate coding scheme of [12] avoids this barrier by restricting the input protocols that can be simulated. Our coding scheme naturally works for any input protocol by allowing adaptive coding schemes as introduced in [11].
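The hash-and-backtrack mechanism can be illustrated with the following Python sketch of one party's control loop. The block length, the truncated SHA-256 hash, and the party/channel primitives (next_message, exchange) are hypothetical stand-ins introduced only for illustration; the actual coding scheme uses much shorter hash exchanges and a more careful backtracking rule.

```python
import hashlib

BLOCK = 16   # rounds of the original protocol run between hash exchanges (illustrative)

def short_hash(transcript, seed):
    """Short hash of the transcript so far, keyed by shared randomness."""
    data = (seed + "|".join(transcript)).encode()
    return hashlib.sha256(data).hexdigest()[:4]   # a few symbols suffice

def simulate(party, channel, seed, total_rounds):
    """One party's loop: run the original protocol as-is, interspersed with
    short hash exchanges; on a hash mismatch, backtrack to the last block
    boundary at which both transcripts were verified to agree."""
    transcript = []        # original-protocol messages accepted so far
    checkpoint = 0         # length of the last verified prefix
    while len(transcript) < total_rounds:
        for _ in range(BLOCK):                        # run the protocol as-is
            msg = party.next_message(transcript)      # hypothetical primitive
            transcript.append(channel.exchange(msg))  # may arrive corrupted
        ours = short_hash(transcript, seed)
        theirs = channel.exchange(ours)               # swap hash values
        if ours == theirs:
            checkpoint = len(transcript)              # both sides agree up to here
        else:
            del transcript[checkpoint:]               # mismatch: backtrack
    return transcript
```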
Distributed computing models typically assume reliable communication between processors. While such assumptions often hold for engineered networks, e.g., due to underlying error-correction protocols, their relevance to biological systems, wherein messages are often distorted before reaching their destination, is quite limited. In this study we take a first step towards reducing this gap by rigorously analyzing a model of communication in large anonymous populations composed of simple agents which interact through short and highly unreliable messages.

We focus on the broadcast problem and the majority-consensus problem. Both are fundamental information dissemination problems in distributed computing, in which the goal of the agents is to converge to some prescribed desired opinion. We initiate the study of these problems in the presence of communication noise. Our model of communication is extremely weak and follows the push gossip communication paradigm: in each round, each agent that wishes to send information delivers a message to a random anonymous agent. This communication is further restricted to contain only one bit (essentially representing an opinion). Lastly, the system is assumed to be so noisy that the bit in each message sent is flipped independently with probability 1/2 − ε, for some small ε > 0.

Even in this severely restricted, stochastic, and noisy setting we give natural protocols that solve the noisy broadcast and the noisy majority-consensus problems efficiently. Our protocols run in O(log n/ε²) rounds and use O(n log n/ε²) messages/bits in total, where n is the number of agents. These bounds are asymptotically optimal and, in fact, are as fast and message-efficient as if each agent had simultaneously been informed directly by an agent that knows the prescribed desired opinion. Our efficient, robust, and simple algorithms suggest that balancing between silence and transmission, synchronization, and majority-based decisions are important ingredients towards understanding collective communication schemes in anonymous and noisy populations.
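As an illustration of the communication model and of a majority-based decision rule, the following Python simulation shows a simplified noisy push gossip protocol. This is our own illustration under the assumption of a constant initial bias toward the correct opinion, not the paper's algorithm; the constant in the round count is likewise illustrative.

```python
import math
import random

def noisy_push_round(opinions, eps):
    """One synchronous round: every agent pushes its one-bit opinion to a
    uniformly random agent, and each bit is flipped independently with
    probability 1/2 - eps (the channel noise of the model)."""
    n = len(opinions)
    inbox = [[] for _ in range(n)]
    for bit in opinions:
        if random.random() < 0.5 - eps:          # channel noise flips the bit
            bit ^= 1
        inbox[random.randrange(n)].append(bit)
    return inbox

def majority_consensus(opinions, eps):
    """Simplified illustration: agents push their initial opinions for
    Theta(log n / eps^2) rounds and then adopt the majority of all bits
    they received.  Assumes a constant initial bias toward the majority
    opinion; constant 8 chosen only for illustration."""
    n = len(opinions)
    rounds = math.ceil(8 * math.log(n) / eps ** 2)
    counts = [[0, 0] for _ in range(n)]
    for _ in range(rounds):
        for i, bits in enumerate(noisy_push_round(opinions, eps)):
            for b in bits:
                counts[i][b] += 1
    return [int(c[1] > c[0]) for c in counts]
```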
The Lovász Local Lemma [5] (LLL) is a powerful result in probability theory stating that the probability that none of a set of bad events happens is nonzero if the probability of each event is small compared to the number of events it depends on. It is often used in combination with the probabilistic method for non-constructive existence proofs. A prominent application is to k-CNF formulas, where the LLL implies that if every clause in the formula shares variables with at most d ≤ 2^k/e other clauses, then the formula has a satisfying assignment. Recently, a randomized algorithm to efficiently construct a satisfying assignment was given by Moser [12]. Subsequently, Moser and Tardos [13] gave a randomized algorithm to construct the structures guaranteed by the LLL in a very general algorithmic framework. We address the main problem left open by Moser and Tardos of derandomizing these algorithms efficiently. Specifically, for a k-CNF formula with m clauses and d ≤ 2^{k/(1+ε)}/e for some ε ∈ (0, 1), we give a deterministic algorithm that finds a satisfying assignment in time Õ(m^{2(1+1/ε)}). This improves upon the deterministic algorithms of Moser and of Moser-Tardos with running time m^{Ω(k²)}, which is superpolynomial for k = ω(1), and upon other previous algorithms, which work only for d ≤ 2^{k/16}/e. Our algorithm works efficiently for a general version of the LLL under the algorithmic framework of Moser and Tardos [13], and is also parallelizable, i.e., has polylogarithmic running time using polynomially many processors.
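For the k-CNF application, the following Python sketch (our own illustration; not the deterministic or parallel algorithm of the paper) computes the dependency degree d of a formula and checks the LLL condition d ≤ 2^k/e. When the condition holds, a satisfying assignment exists, and the randomized clause-resampling sketched after the first abstract finds one, with each violated clause treated as a bad event on its own variables.

```python
import math

def dependency_degree(clauses):
    """d = the maximum number of *other* clauses with which a clause shares
    at least one variable (clauses are lists of signed ints, literal x or -x
    for variable x)."""
    var_sets = [{abs(lit) for lit in clause} for clause in clauses]
    return max(
        sum(1 for j, other in enumerate(var_sets) if j != i and vars_i & other)
        for i, vars_i in enumerate(var_sets)
    )

def lll_guarantees_sat(clauses, k):
    """LLL guarantee for k-CNF: a satisfying assignment exists whenever
    d <= 2^k / e."""
    return dependency_degree(clauses) <= 2 ** k / math.e

# Example: a small 3-CNF where every clause overlaps at most two others.
clauses = [[1, 2, 3], [-1, 4, 5], [2, -4, 6], [7, 8, 9]]
print(lll_guarantees_sat(clauses, k=3))   # d = 2 <= 2^3/e ~ 2.94 -> True
```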