Locally correctable codes (LCCs) and locally testable codes (LTCs) are error-correcting codes that admit local algorithms for correction and detection of errors. These algorithms are local in the sense that they query only a small number of entries of the corrupted codeword. The fundamental question about LCCs and LTCs is to determine the optimal tradeoff among their rate, distance, and query complexity. In this work, we construct the first LCCs and LTCs with constant rate, constant relative distance, and sub-polynomial query complexity. Specifically, we show that there exist LCCs and LTCs with block length n, constant rate (which can even be taken arbitrarily close to 1), and constant relative distance, whose query complexity is exp(Õ(√(log n))) (for LCCs) and (log n)^{O(log log n)} (for LTCs). In addition to having small query complexity, our codes also achieve better tradeoffs between the rate and the relative distance than were previously known to be achievable by LCCs or LTCs. Specifically, over large (but constant-size) alphabets, our codes approach the Singleton bound, that is, they have almost the best-possible relationship between their rate and distance. Over the binary alphabet, our codes meet the Zyablov bound. Such tradeoffs between the rate and the relative distance were previously not known for any o(n) query complexity. Our results on LCCs also immediately give locally decodable codes with the same parameters.
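To make the query-access pattern concrete, here is a minimal sketch of local correction for the classical Hadamard code. This is not the construction of this work (the Hadamard code has vanishing rate, whereas the codes above have constant rate), and the function names are illustrative; it only shows what recovering one symbol with 2 queries per trial looks like.

```python
import random

def hadamard_encode(x):
    """Encode x (a list of k bits) as the 2^k inner products <x, a> mod 2."""
    k = len(x)
    return [sum(x[i] & ((a >> i) & 1) for i in range(k)) % 2
            for a in range(2 ** k)]

def locally_correct(word, k, a, trials=25):
    """Recover position a of the codeword from the corrupted `word`,
    using 2 queries per trial: by linearity over GF(2),
    word[r] + word[r XOR a] = <x, a> mod 2 whenever both queried
    positions are uncorrupted, so a majority vote succeeds with high
    probability when few positions are corrupted."""
    votes = 0
    for _ in range(trials):
        r = random.randrange(2 ** k)
        votes += (word[r] + word[r ^ a]) % 2
    return int(votes * 2 > trials)  # majority vote

# Usage: flip one symbol, then locally correct a position.
x = [1, 0, 1, 1]
c = hadamard_encode(x)
corrupted = c[:]
corrupted[3] ^= 1  # corrupt one entry
assert locally_correct(corrupted, 4, 3) == c[3]
```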
An error-correcting code is said to be locally testable if there is a test that checks whether a given string is a codeword, or is far from the code, by reading only a constant number of symbols of the string. While the best known construction of LTCs by Ben-Sasson and Sudan (STOC 2005) and Dinur (J. ACM 54(3)) achieves very efficient parameters, it relies heavily on algebraic tools and on PCP machinery. In this work we present a new and arguably simpler construction of LTCs that is purely combinatorial, does not rely on PCP machinery, and matches the parameters of the best known construction. However, unlike the latter construction, our construction is not entirely explicit.
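As a concrete (and much weaker) illustration of local testing, the following sketch implements the classical Blum-Luby-Rubinfeld 3-query linearity test, which is a local test for the Hadamard code. It is not the construction of this paper; it only shows what "reading a constant number of symbols" looks like in code.

```python
import random

def blr_test(word, k, trials=50):
    """3 queries per trial. `word` has length 2^k; indices are vectors
    in {0,1}^k, and r ^ s is vector addition over GF(2). Codewords of
    the Hadamard code are exactly the linear functions, which satisfy
    f(r) + f(s) = f(r + s); strings far from the code fail this check
    with probability proportional to their distance from the code."""
    for _ in range(trials):
        r = random.randrange(2 ** k)
        s = random.randrange(2 ** k)
        if (word[r] + word[s]) % 2 != word[r ^ s]:
            return False  # caught a violation: reject
    return True  # plausibly close to a codeword
```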
In this work, we construct the first locally correctable codes (LCCs) and locally testable codes (LTCs) with constant rate, constant relative distance, and sub-polynomial query complexity. Specifically, we show that there exist binary LCCs and LTCs with block length n, constant rate (which can even be taken arbitrarily close to 1), constant relative distance, and query complexity exp(Õ(√(log n))). Previously, such codes were known to exist only with Ω(n^β) query complexity (for constant β > 0), and several quite different constructions were known.

Our codes are based on a general distance-amplification method of Alon and Luby [AL96]. We show that this method interacts well with local correctors and testers, and we obtain our main results by applying it to suitably constructed LCCs and LTCs in the non-standard regime of sub-constant relative distance.

Along the way, we also construct LCCs and LTCs over large alphabets, with the same query complexity exp(Õ(√(log n))), which additionally have the property of approaching the Singleton bound: they have almost the best-possible relationship between their rate and distance. This has the surprising consequence that asking for a large-alphabet error-correcting code to further be an LCC or LTC with exp(Õ(√(log n))) query complexity does not require any sacrifice in terms of rate and distance! Such a result was previously not known for any o(n) query complexity.

Our results on LCCs also immediately give locally decodable codes (LDCs) with the same parameters.

* A preliminary version of this work appeared as [Mei14].
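For reference, a short note on the Singleton bound invoked above; the asymptotic form below is standard and is what "approaching the Singleton bound" refers to.

```latex
% Singleton bound: every code with block length $n$, dimension $k$, and
% minimum distance $d$ satisfies $d \le n - k + 1$. In terms of the rate
% $r = k/n$ and the relative distance $\delta = d/n$ this reads
\[
  r \;\le\; 1 - \delta + \frac{1}{n},
\]
% so a family of codes ``approaches the Singleton bound'' if for every
% $\varepsilon > 0$ it achieves
\[
  r \;\ge\; 1 - \delta - \varepsilon
\]
% over an alphabet whose size depends only on $\varepsilon$.
```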
The IP theorem, which asserts that IP = PSPACE [Lund et al.; Shamir], is one of the major achievements of complexity theory. The known proofs of the theorem are based on the arithmetization technique, which transforms a quantified Boolean formula into a related polynomial. The intuition that underlies the use of polynomials is commonly explained by the fact that polynomials constitute good error-correcting codes. However, the known proofs seem tailored to the use of polynomials and do not generalize to arbitrary error-correcting codes. In this work, we show that the IP theorem can be proved by using general error-correcting codes and their tensor products. We believe that this establishes a rigorous basis for the aforementioned intuition and sheds further light on the IP theorem.

Introduction. The IP theorem [9, 13] asserts that IP = PSPACE or, in other words, that any set in PSPACE has an interactive proof. This theorem is fundamental to our understanding of both interactive proofs and polynomial-space computations. In addition, it has important applications, such as the existence of instance checkers for PSPACE-complete sets [4] and the existence of zero-knowledge proofs for every set in PSPACE [7, 3]. Indeed, the theorem is one of the major achievements of complexity theory. Additional proofs of the IP theorem have been suggested explicitly by Shen [14] and implicitly by Goldwasser, Kalai, and Rothblum [5].

The known proofs of the IP theorem go roughly along the following lines. Suppose that we are given a claim that can be verified in polynomial space and we are required to design an interactive protocol for verifying the claim. We begin by expressing the claim as a quantified Boolean formula, using the PSPACE-completeness of the TQBF problem. Then we "arithmetize" the formula, transforming it into a claim about the value of a particular arithmetic expression. Finally, we use the celebrated sum-check protocol in order to verify the value of the arithmetic expression. One key point is that the sum-check protocol employs the fact that certain restrictions of the arithmetic expression are low-degree polynomials.

While the arithmetization technique used in the proof turned out to be extremely useful, it seems somewhat odd that one has to use polynomials in order to prove the theorem, since the theorem itself says nothing about polynomials. The intuition behind the use of polynomials in the proof is usually explained by the fact that low-degree polynomials constitute good error-correcting codes that have additional useful properties. (In particular, they are capable of encoding computation, and their restrictions to lines also constitute good error-correcting codes.)
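Since the sum-check protocol is central to the proof outline above, here is a minimal self-contained sketch of it with an honest prover, over a prime field. Names such as `sumcheck` and `partial_sum` are illustrative, not taken from the paper; the protocol itself (m rounds, each reducing the claim via a low-degree round polynomial, plus one final evaluation of the polynomial at a random point) is the standard one.

```python
import random

P = 2 ** 31 - 1  # a prime modulus; the field F_P

def partial_sum(g, m, prefix, t):
    """Sum g(prefix, t, b) over all Boolean suffixes b of the remaining
    m - len(prefix) - 1 variables."""
    i = len(prefix)
    total = 0
    for mask in range(2 ** (m - i - 1)):
        suffix = [(mask >> j) & 1 for j in range(m - i - 1)]
        total = (total + g(prefix + [t] + suffix)) % P
    return total

def lagrange_eval(vals, x):
    """Evaluate the polynomial with values vals[t] at t = 0..deg at the
    point x, by Lagrange interpolation mod P."""
    total = 0
    for t, v in enumerate(vals):
        num, den = 1, 1
        for u in range(len(vals)):
            if u != t:
                num = num * (x - u) % P
                den = den * (t - u) % P
        total = (total + v * num * pow(den, -1, P)) % P
    return total

def sumcheck(g, m, deg, claimed):
    """Verify the claim  sum over x in {0,1}^m of g(x) == claimed,
    where g has degree <= deg in each variable. Returns True iff the
    verifier accepts; g is evaluated by the verifier only once."""
    prefix, current = [], claimed
    for _ in range(m):
        # Prover sends the round polynomial s(t), represented by its
        # values at t = 0, 1, ..., deg.
        vals = [partial_sum(g, m, prefix, t) for t in range(deg + 1)]
        # Verifier checks consistency: s(0) + s(1) must equal the claim.
        if (vals[0] + vals[1]) % P != current % P:
            return False
        # Verifier sends a random challenge and reduces the claim to s(r).
        r = random.randrange(P)
        current = lagrange_eval(vals, r)
        prefix.append(r)
    # Final check: one evaluation of g at the random point.
    return current == g(prefix) % P

# Usage: verify the sum of g(x1, x2, x3) = x1*x2 + x3 over {0,1}^3.
g = lambda v: (v[0] * v[1] + v[2]) % P
true_sum = sum(g([a, b, c]) for a in (0, 1)
               for b in (0, 1) for c in (0, 1)) % P
assert sumcheck(g, 3, deg=1, claimed=true_sum)
```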