The Small-Set Expansion Hypothesis (Raghavendra, Steurer, STOC 2010) is a natural hardness assumption concerning the problem of approximating the edge expansion of small sets in graphs. This hardness assumption is closely connected to the Unique Games Conjecture (Khot, STOC 2002); in particular, the Small-Set Expansion Hypothesis implies the Unique Games Conjecture (Raghavendra, Steurer, STOC 2010).

Our main result is that the Small-Set Expansion Hypothesis is in fact equivalent to a variant of the Unique Games Conjecture. More precisely, the hypothesis is equivalent to the Unique Games Conjecture restricted to instances with a fairly mild condition on the expansion of small sets. Alongside, we obtain the first strong hardness of approximation results for the Balanced Separator and Minimum Linear Arrangement problems; previously, no such hardness was known for these problems even assuming the Unique Games Conjecture.

These results not only establish the Small-Set Expansion Hypothesis as a natural unifying hypothesis that implies the Unique Games Conjecture, all its consequences, and, in addition, hardness results for other problems such as Balanced Separator and Minimum Linear Arrangement; they also show that the Small-Set Expansion problem lies at the combinatorial heart of the Unique Games Conjecture.

The key technical ingredient is a new way of exploiting the structure of the Unique Games instances obtained from the Small-Set Expansion Hypothesis via (Raghavendra, Steurer, 2010). This additional structure allows us to modify standard reductions in a way that essentially destroys their local-gadget nature. Using this modification, we can argue about the expansion in the graphs produced by the reduction without relying on expansion properties of the underlying Unique Games instance (which would be impossible for a local-gadget reduction).
We study integrality gaps for SDP relaxations of constraint satisfaction problems in the hierarchy of SDPs defined by Lasserre. Schoenebeck [25] recently showed the first integrality gaps for these problems, proving that for MAX k-XOR the ratio of the SDP optimum to the integer optimum may be as large as 2 even after Ω(n) rounds of the Lasserre hierarchy.

We show that for the general MAX k-CSP problem over the binary domain, the ratio of the SDP optimum to the value achieved by the optimal assignment can be as large as 2^k/2k − ε even after Ω(n) rounds of the Lasserre hierarchy. For alphabet size q, where q is prime, we give a lower bound of q^k/q(q − 1)k − ε for Ω(n) rounds. The method of proof also gives optimal integrality gaps for a predicate chosen at random.

We also explore how to translate gaps for CSPs into integrality gaps for other problems using reductions, and establish SDP gaps for Maximum Independent Set, Approximate Graph Coloring, Chromatic Number, and Minimum Vertex Cover. For Independent Set and Chromatic Number, we show integrality gaps of n/2^{O(√(log n log log n))} even after 2^{Ω(√(log n log log n))} rounds. In the case of Approximate Graph Coloring, for every constant l we construct graphs with chromatic number Ω(2^{l/2}/l^2) which admit a vector l-coloring for the SDP obtained by Ω(n) rounds. For Vertex Cover, we show an integrality gap of 1.36 for Ω(n^δ) rounds, for a small constant δ.

The results for CSPs provide the first examples of Ω(n)-round integrality gaps matching hardness results known only under the Unique Games Conjecture. This, together with some additional properties of the integrality gap instance, allows for gaps in the case of Independent Set and Chromatic Number that are stronger than the hardness results known even under the Unique Games Conjecture.
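For reference, the two CSP gap bounds above can be set in display form. This is only a restatement of the abstract's claims: Φ denotes a worst-case instance, SDP_t(Φ) the value of the t-round Lasserre relaxation, OPT(Φ) the value of the best assignment, and ε > 0 an arbitrarily small constant.

```latex
% Binary MAX k-CSP: after Omega(n) rounds of the Lasserre hierarchy,
\[
  \frac{\mathrm{SDP}_{\Omega(n)}(\Phi)}{\mathrm{OPT}(\Phi)} \;\geq\; \frac{2^k}{2k} - \varepsilon ,
\]
% and over an alphabet of prime size q,
\[
  \frac{\mathrm{SDP}_{\Omega(n)}(\Phi)}{\mathrm{OPT}(\Phi)} \;\geq\; \frac{q^k}{q(q-1)k} - \varepsilon .
\]
```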
A theorem of Green, Tao, and Ziegler can be stated (roughly) as follows: if R is a pseudorandom set, and D is a dense subset of R, then D may be modeled by a set M that is dense in the entire domain such that D and M are indistinguishable. (The precise statement refers to "measures" or distributions rather than sets.) The proof of this theorem is very general, and it applies to notions of pseudorandomness and indistinguishability defined in terms of any family of distinguishers with some mild closure properties. The proof proceeds via iterative partitioning and an energy increment argument, in the spirit of the proof of the weak Szemerédi regularity lemma. The "reduction" involved in the proof has exponential complexity in the distinguishing probability.

We present a new proof inspired by Nisan's proof of Impagliazzo's hardcore set theorem. The reduction in our proof has polynomial complexity in the distinguishing probability and provides a new characterization of the notion of "pseudoentropy" of a distribution.

We also follow the connection between the two theorems and obtain a new proof of Impagliazzo's hardcore set theorem via iterative partitioning and energy increment. While our reduction has exponential complexity in some parameters, it has the advantage that the hardcore set is efficiently recognizable.
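Schematically, the Green–Tao–Ziegler statement above can be summarized as follows. This is an informal paraphrase only: F is the family of distinguishers, U the uniform distribution, δ the density, and the primed family and parameter on the right-hand side are related to, but not identical to, those on the left.

```latex
% If R is pseudorandom (indistinguishable from uniform by F) and
% D is a delta-dense subset of R, then D has a dense model M.
\[
  R \approx_{F,\varepsilon} U
  \;\;\text{and}\;\;
  D \subseteq R \text{ with density } \delta
  \;\Longrightarrow\;
  \exists\, M \text{ of density } \delta \text{ in the domain with }
  D \approx_{F',\varepsilon'} M .
\]
```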
We show that every high-entropy distribution is indistinguishable from an efficiently samplable distribution of the same entropy. Specifically, we prove that if D is a distribution over {0, 1}^n of min-entropy at least n − k, then for every S and ε there is a circuit C of size at most S · poly(ε^{-1}, 2^k) that samples a distribution of entropy at least n − k that is ε-indistinguishable from D by circuits of size S.

Stated in a more abstract form (where we refer to indistinguishability by arbitrary families of distinguishers rather than bounded-size circuits), our result implies (a) the Weak Szemerédi Regularity Lemma of Frieze and Kannan, (b) a constructive version of the Dense Model Theorem of Green, Tao, and Ziegler with better quantitative parameters (polynomial rather than exponential in the distinguishing probability ε), and (c) the Impagliazzo Hardcore Set Lemma. It appears to be the general result underlying the known connections between "regularity" results in graph theory, "decomposition" results in additive combinatorics, and the Hardcore Lemma in complexity theory.

We present two proofs of our result, one in the spirit of Nisan's proof of the Hardcore Lemma via duality of linear programming, and one similar to Impagliazzo's "boosting" proof. A third proof, by iterative partitioning, gives a sampler whose complexity is exponential in the distinguishing probability ε.
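With all quantifiers made explicit, the main statement above reads as follows. This is only a schematic restatement: ε denotes the distinguishing probability, C(U) the output distribution of C on a uniform input, and ≈_{S,ε} denotes ε-indistinguishability by circuits of size S.

```latex
% Every distribution of min-entropy at least n-k has an efficiently
% samplable, indistinguishable surrogate of comparable entropy.
\[
  H_\infty(D) \ge n - k
  \;\Longrightarrow\;
  \forall S, \varepsilon \;\; \exists\, C \text{ of size } \le S \cdot \mathrm{poly}(\varepsilon^{-1}, 2^k):
  \quad H\big(C(U)\big) \ge n - k
  \;\text{ and }\;
  C(U) \approx_{S,\varepsilon} D .
\]
```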
We present an efficient algorithm to find a good solution to the Unique Games problem when the constraint graph is an expander. We introduce a new analysis of the standard SDP in this case that involves correlations among distant vertices. It also leads to a parallel repetition theorem for unique games when the graph is an expander.
We study linear programming relaxations of Vertex Cover and Max Cut arising from repeated applications of the "lift-and-project" method of Lovász and Schrijver, starting from the standard linear programming relaxation.

For Vertex Cover, Arora, Bollobás, Lovász, and Tourlakis prove that the integrality gap remains at least 2 − ε after Ω_ε(log n) rounds, where n is the number of vertices, and Tourlakis proves that the integrality gap remains at least 1.5 − ε after Ω((log n)^2) rounds. Fernandez de la Vega and Kenyon prove that the integrality gap of Max Cut is at most 1/2 + ε after any constant number of rounds. (Their result also applies to the more powerful Sherali-Adams method.)

We prove that the integrality gap of Vertex Cover remains at least 2 − ε after Ω_ε(n) rounds, and that the integrality gap of Max Cut remains at most 1/2 + ε after Ω_ε(n) rounds.
We prove the existence of a poly(n, m)-time computable pseudorandom generator which "1/poly(n, m)-fools" DNFs with n variables and m terms, and has seed length O(log^2(nm) · log log(nm)). Previously, the best pseudorandom generator for depth-2 circuits had seed length O(log^3(nm)) and was due to Bazzi (FOCS 2007).

It follows from our proof that a 1/m^{Õ(log mn)}-biased distribution 1/poly(nm)-fools DNFs with m terms and n variables. For inverse polynomial distinguishing probability this is nearly tight, because we show that for every m and δ there is a 1/m^{Ω(log 1/δ)}-biased distribution X and a DNF φ with m terms such that φ is not δ-fooled by X.

For the case of read-once DNFs, we show that seed length O(log mn · log 1/δ) suffices, which is an improvement for large δ. It also follows from our proof that a 1/m^{O(log 1/δ)}-biased distribution δ-fools all read-once DNFs with m terms. We show that this result too is nearly tight, by constructing a 1/m^{Ω(log 1/δ)}-biased distribution that does not δ-fool a certain m-term read-once DNF.
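As a concrete illustration of the two definitions at play above (an ε-biased distribution, and a distribution δ-fooling a DNF), here is a small brute-force check on a toy example. The 4-variable read-once DNF and the two distributions are hypothetical examples for illustration, not instances from the paper.

```python
from itertools import product

def bias(dist, n):
    """Max |E_{x~dist}[(-1)^<S,x>]| over nonempty S: the epsilon in 'epsilon-biased'."""
    worst = 0.0
    for S in product([0, 1], repeat=n):
        if not any(S):
            continue  # the empty parity always has expectation 1; skip it
        e = sum(p * (-1) ** sum(s * xi for s, xi in zip(S, x))
                for x, p in dist.items())
        worst = max(worst, abs(e))
    return worst

def fooling_error(dist, n, phi):
    """|Pr_{x~dist}[phi(x)] - Pr_{x~uniform}[phi(x)]|: dist delta-fools phi iff this <= delta."""
    p_dist = sum(p for x, p in dist.items() if phi(x))
    p_unif = sum(phi(x) for x in product([0, 1], repeat=n)) / 2 ** n
    return abs(p_dist - p_unif)

n = 4
# A toy read-once DNF on 4 variables: (x0 AND x1) OR (x2 AND x3).
phi = lambda x: (x[0] and x[1]) or (x[2] and x[3])

# The uniform distribution is 0-biased and fools every function exactly.
uniform = {x: 1 / 2 ** n for x in product([0, 1], repeat=n)}
assert bias(uniform, n) < 1e-9
assert fooling_error(uniform, n, phi) < 1e-9

# A skewed two-point distribution has maximal bias (the parity x0 XOR x1
# is constant on its support) and fooling error |1/2 - 7/16| = 1/16.
skewed = {(1, 1, 0, 0): 0.5, (0, 0, 0, 0): 0.5}
assert abs(bias(skewed, n) - 1.0) < 1e-9
assert abs(fooling_error(skewed, n, phi) - 1 / 16) < 1e-9
```

The brute-force loop over all 2^n parities is of course only feasible for toy n; the point of the small-bias bounds above is precisely that a distribution passing all parity tests up to bias 1/m^{O(log 1/δ)} already δ-fools read-once DNFs.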