We consider error-correcting codes where a bit of the message can be probabilistically recovered by looking at a limited number of bits (or blocks of bits) of a (possibly) corrupted encoding. Such codes can be derived from multivariate polynomial encodings, and have several applications in complexity theory, such as worst-case to average-case reductions, probabilistically checkable proofs, and private information retrieval. Such codes could have practical applications if they simultaneously had constant information rate, the ability to correct a linear number of errors, and very efficient (ideally, constant-time) reconstruction procedures. In particular, they would give fault-tolerant data storage with unlimited scalability. We show a negative result on the existence of such codes; namely, that linear encoding length is incompatible with a decoding procedure making a constant number of queries (which is necessary if one is to have constant reconstruction time). In particular, if a bit of a message of length n can be retrieved with advantage ε by looking at q blocks of length l, and the reconstruction procedure is robust to a fraction δ of errors, then the encoding consists of m = Ω(poly(1/q, δ, ε) · (n/l)^(q/(q−1))) blocks of length l. This is the first lower bound for this class of codes. Our bound is far from the known (exponential) upper bound when q is a constant. Closing this gap remains a challenge.
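The exponential upper bound mentioned above is achieved by the Hadamard code, the standard q = 2 example of such a code: an n-bit message is encoded into 2^n bits, and any message bit can be recovered with two queries to a corrupted codeword. The following minimal sketch is our own illustration (variable names and parameters are not from the paper):

```python
import random

def inner_product(a, b):
    """Inner product over GF(2) of two bit-vectors packed into ints."""
    return bin(a & b).count("1") % 2

def hadamard_encode(x, n):
    """Encode an n-bit message x (packed into an int) as the 2^n-bit
    Hadamard codeword: position a holds <x, a> mod 2.  The length is
    exponential in n -- the blow-up the lower bound is about."""
    return [inner_product(x, a) for a in range(2 ** n)]

def decode_bit(word, i, n, trials, rng):
    """Recover bit i of the message with q = 2 queries per trial:
    <x, a> XOR <x, a XOR e_i> = x_i, so a trial is correct whenever
    both queried positions are uncorrupted; take a majority vote."""
    votes = 0
    for _ in range(trials):
        a = rng.randrange(2 ** n)
        votes += word[a] ^ word[a ^ (1 << i)]
    return int(votes > trials // 2)

rng = random.Random(0)
n = 10
x = rng.randrange(2 ** n)
word = hadamard_encode(x, n)

# Corrupt a delta = 5% fraction of the codeword positions.
for pos in rng.sample(range(2 ** n), k=2 ** n // 20):
    word[pos] ^= 1

decoded = sum(decode_bit(word, i, n, trials=101, rng=rng) << i for i in range(n))
assert decoded == x
```

Each single trial errs with probability at most 2δ = 10%, so a majority over 101 independent trials fails only with negligible probability.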
We introduce a new approach to constructing extractors. Extractors are algorithms that transform a "weakly random" distribution into an almost uniform distribution. Explicit constructions of extractors have a variety of important applications, and tend to be very difficult to obtain. We demonstrate an unsuspected connection between extractors and pseudorandom generators. In fact, we show that every pseudorandom generator of a certain kind is an extractor. A pseudorandom generator construction due to Impagliazzo and Wigderson, once reinterpreted via our connection, is already an extractor that beats most known constructions and solves an important open question. We also show that, using the simpler Nisan–Wigderson generator and standard error-correcting codes, one can build even better extractors with the additional advantage that both the construction and the analysis are simple and admit a short self-contained description.
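The constructions in the paper are too involved for a short snippet, but the extractor interface itself (weak source plus short uniform seed in, near-uniform bits out) can be made concrete with a different, classical technique: a seeded linear hash in the spirit of the leftover hash lemma. The source model and all parameters below are illustrative assumptions of ours, not from the paper:

```python
import random

def stat_distance(counts, total, m):
    """Total-variation distance of an m-bit output distribution from uniform."""
    u = total / 2 ** m
    return sum(abs(counts.get(v, 0) - u) for v in range(2 ** m)) / (2 * total)

def linear_hash(rows, x):
    """Seeded extractor candidate: output bit j is <row_j, x> over GF(2)."""
    return sum((bin(r & x).count("1") % 2) << j for j, r in enumerate(rows))

N, M = 12, 4
# A weak source: 12-bit strings whose top two bits are stuck at 0,
# uniform otherwise, so it has min-entropy 10 < 12.
support = [x for x in range(2 ** 10)]

# Naively outputting bits 8..11 is far from uniform: two bits are constant.
raw = {}
for x in support:
    v = (x >> 8) & 0xF
    raw[v] = raw.get(v, 0) + 1
raw_dist = stat_distance(raw, len(support), M)

# A short random seed (a 4 x 12 GF(2) matrix) hashes the source
# to an almost-uniform 4-bit output for almost every seed.
rng = random.Random(1)
dists = []
for _ in range(50):
    rows = [rng.randrange(2 ** N) for _ in range(M)]
    counts = {}
    for x in support:
        v = linear_hash(rows, x)
        counts[v] = counts.get(v, 0) + 1
    dists.append(stat_distance(counts, len(support), M))
avg_dist = sum(dists) / len(dists)
```

Here `raw_dist` is large (0.75) while `avg_dist` over random seeds is close to zero, which is exactly the behavior the extractor definition demands.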
A basic fact in spectral graph theory is that the number of connected components in an undirected graph is equal to the multiplicity of the eigenvalue zero in the Laplacian matrix of the graph. In particular, the graph is disconnected if and only if there are at least two eigenvalues equal to zero. Cheeger's inequality and its variants provide an approximate version of the latter fact; they state that a graph has a sparse cut if and only if there are at least two eigenvalues that are close to zero. It has been conjectured that an analogous characterization holds for higher multiplicities: there are k eigenvalues close to zero if and only if the vertex set can be partitioned into k subsets, each defining a sparse cut. We resolve this conjecture positively. Our result provides a theoretical justification for clustering algorithms that use the bottom k eigenvectors to embed the vertices into R^k, and then apply geometric considerations to the embedding. We also show that these techniques yield a nearly optimal quantitative connection between the expansion of sets of size ≈ n/k and λ_k, the kth smallest eigenvalue of the normalized Laplacian, where n is the number of vertices. In particular, we show that in every graph there are at least k/2 disjoint sets (one of which will have size at most 2n/k), each having expansion at most O(√(λ_k log k)). Louis, Raghavendra, Tetali, and Vempala have independently proved a slightly weaker version of this last result. The √(log k) bound is tight, up to constant factors, for the "noisy hypercube" graphs.
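The basic spectral fact in the first sentence is easy to verify numerically. The toy example below (ours, using NumPy) also shows the approximate version: adding a single bridge edge between two components leaves only one zero eigenvalue, but λ_2 remains small, the signature of a sparse cut.

```python
import numpy as np

def laplacian(edges, n):
    """Combinatorial Laplacian L = D - A of an undirected graph on n vertices."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    return np.diag(A.sum(axis=1)) - A

# Two triangles with no edges between them: two connected components.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
eigvals = np.linalg.eigvalsh(laplacian(edges, 6))
num_components = int(np.sum(eigvals < 1e-9))
# The multiplicity of eigenvalue zero counts the components: here, 2.

# Adding one bridge edge merges the components: a single zero eigenvalue
# remains, and lambda_2 is positive but small (it is at most the vertex
# connectivity, which is 1 here) -- the "sparse cut" regime of Cheeger's
# inequality.
bridged = np.linalg.eigvalsh(laplacian(edges + [(2, 3)], 6))
```

The eigenvalues are returned in ascending order, so `bridged[1]` is λ_2.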
Impagliazzo and Wigderson have recently shown that if there exists a decision problem solvable in time 2^O(n) and having circuit complexity 2^Ω(n) (for all but finitely many n), then P = BPP. This result is a culmination of a series of works showing connections between the existence of hard predicates and the existence of good pseudorandom generators. The construction of Impagliazzo and Wigderson goes through three phases of "hardness amplification" (a multivariate polynomial encoding, a first derandomized XOR lemma, and a second derandomized XOR lemma) that are composed with a pseudorandom generator construction of Nisan and Wigderson (J. Comput. System Sci. 49 (1994), 149–167). In this paper we present two different approaches to proving the main result of Impagliazzo and Wigderson. In developing each approach, we introduce new techniques and prove new results that could be useful in future improvements and/or applications of hardness-randomness trade-offs. Our first result is that when (a modified version of) the Nisan–Wigderson generator construction is applied with a "mildly" hard predicate, the result is a generator that produces a distribution indistinguishable from having large min-entropy. An extractor can then be used to produce a distribution computationally indistinguishable from uniform. This is the first construction of a pseudorandom generator that works with a mildly hard predicate without doing hardness amplification. We then show that in the Impagliazzo–Wigderson construction only the first hardness-amplification phase (encoding with a multivariate polynomial) is necessary, since it already gives the required average-case hardness. We prove this result by (i) establishing a connection between the hardness-amplification problem and a list-decoding problem for error-correcting codes; and (ii) presenting a list-decoding algorithm for error-correcting codes based on multivariate polynomials that improves and simplifies a previous one by Arora and Sudan.
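The list-decoding algorithm in the paper works with multivariate polynomials and is beyond a short example, but the underlying idea, that low-degree polynomial encodings tolerate errors because distinct low-degree polynomials rarely agree, can be illustrated with a toy univariate (Reed–Solomon) code decoded by brute force. The field size and parameters below are illustrative choices of ours:

```python
import itertools

P = 13  # a small prime field GF(13)

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest-degree coefficient first) mod P."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def interpolate(points):
    """Lagrange interpolation: return (as a function) the unique
    polynomial of degree < len(points) through the given points."""
    def at(x):
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total
    return at

def decode(received, k, e):
    """Brute-force unique decoding: find the degree < k polynomial that
    agrees with the received word on at least n - e positions.  Any two
    distinct degree < k polynomials agree on at most k - 1 points, so for
    e <= (n - k) / 2 this polynomial is unique."""
    n = len(received)
    for subset in itertools.combinations(range(n), k):
        f = interpolate([(x, received[x]) for x in subset])
        if sum(f(x) == received[x] for x in range(n)) >= n - e:
            return [f(x) for x in range(n)]
    return None

message = [3, 1, 4]                                   # degree < 3 polynomial
codeword = [poly_eval(message, x) for x in range(7)]  # evaluate at 7 points
received = list(codeword)
received[1] = (received[1] + 5) % P                   # inject two errors
received[4] = (received[4] + 1) % P
assert decode(received, k=3, e=2) == codeword
```

List decoding generalizes this by returning all polynomials with nontrivial agreement, which is what connects decoding to hardness amplification in the paper.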