Let D be a b-wise independent distribution over {0,1}^m. Let E be the "noise" distribution over {0,1}^m where the bits are independent and each bit is 1 with probability η/2. We study which tests f: {0,1}^m → [−1,1] are ε-fooled by D + E, i.e., |E[f(D + E)] − E[f(U)]| ≤ ε, where U is the uniform distribution. We show that D + E ε-fools product tests f: ({0,1}^n)^k → [−1,1] given by the product of k bounded functions on disjoint n-bit inputs, with error ε = k(1 − η)^{Ω(b^2/m)}, where m = nk and b ≥ n. This bound is tight when b = Ω(m) and η ≥ (log k)/m. For b ≥ m^{2/3} log m and any constant η, the distribution D + E also 0.1-fools log-space algorithms. We develop two applications of this type of result. First, we prove communication lower bounds for decoding noisy codewords of length m split among k parties. For Reed-Solomon codes of dimension m/k where k = O(1), communication Ω(ηm) − O(log m) is required to decode one message symbol from a codeword with ηm errors, and communication O(ηm log m) suffices. Second, we obtain pseudorandom generators. We can ε-fool product tests f: ({0,1}^n)^k → [−1,1] under any permutation of the bits with seed lengths 2n + Õ(k^2 log(1/ε)) and O(n) + O(nk log(1/ε)). Previous generators have seed lengths ≥ nk/2 or ≥ n√(nk). For the special case where the k bounded functions have range {0,1}, the previous generators have seed length ≥ (n + log k) log(1/ε).
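The two ingredients of the abstract, a b-wise independent distribution D and the noise distribution E, are easy to simulate. The sketch below is only an illustration of the setup, not the paper's construction: it samples bits from the evaluations of a random degree-(b−1) polynomial over Z_p (evaluations at distinct points are exactly b-wise independent and uniform mod p; reducing mod 2 adds an O(1/p) bias per bit, which we ignore for this demonstration), then XORs in E. The function names and the prime p = 10007 are our own illustrative choices.

```python
import random

def bwise_bits(m, b, p=10007, rng=random):
    # A random polynomial of degree < b over Z_p: its evaluations at m
    # distinct points are exactly b-wise independent and uniform mod p.
    # Taking each evaluation mod 2 yields bits that are b-wise independent
    # up to a small O(1/p) bias per bit (p is odd), acceptable for a demo.
    coeffs = [rng.randrange(p) for _ in range(b)]

    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod p
            acc = (acc * x + c) % p
        return acc

    return [poly(i) % 2 for i in range(1, m + 1)]

def add_noise(bits, eta, rng=random):
    # XOR with the noise distribution E: each noise bit is 1 w.p. eta/2.
    return [x ^ (1 if rng.random() < eta / 2 else 0) for x in bits]
```

A product test would then be evaluated on the m = nk output bits, one bounded function per n-bit block.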
Abstract. We show that public-key bit-encryption schemes which support weak (i.e., compact) homomorphic evaluation of any sufficiently "sensitive" collection of functions cannot be proved message indistinguishable beyond AM ∩ coAM via general (adaptive) reductions, and beyond statistical zero-knowledge via reductions of constant query complexity. Examples of sensitive collections include parities, majorities, and the class consisting of all AND and OR functions. We also give a method for converting a strong (i.e., distribution-preserving) homomorphic evaluator for essentially any boolean function (except the trivial ones, the NOT function, and the AND and OR functions) into a rerandomization algorithm: this is a procedure that converts a ciphertext into another ciphertext which is statistically close to being independent and identically distributed with the original one. Our transformation preserves negligible statistical error.
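To make the notion of a rerandomization algorithm concrete, here is the classical textbook example, rerandomizing a toy ElGamal ciphertext by multiplying it with a fresh encryption of 1. This is NOT the paper's transformation (which derives rerandomization from a distribution-preserving homomorphic evaluator for a boolean function); it only illustrates the target property: the output ciphertext is distributed like a fresh encryption of the same message. All parameters below (the prime, generator, and key) are toy values for demonstration.

```python
import random

P = 467           # small prime, for illustration only (insecure)
G = 2             # group element used as generator
x = 153           # toy secret key
H = pow(G, x, P)  # public key h = g^x

def encrypt(m, rng=random):
    # ElGamal: (g^r, m * h^r) for fresh randomness r.
    r = rng.randrange(1, P - 1)
    return (pow(G, r, P), (m * pow(H, r, P)) % P)

def rerandomize(ct, rng=random):
    # Multiply componentwise by (g^s, h^s), an encryption of 1:
    # (g^r, m*h^r) -> (g^{r+s}, m*h^{r+s}), a fresh-looking ciphertext.
    a, b = ct
    s = rng.randrange(1, P - 1)
    return ((a * pow(G, s, P)) % P, (b * pow(H, s, P)) % P)

def decrypt(ct):
    # m = b / a^x, with the inverse computed via Fermat's little theorem.
    a, b = ct
    ax = pow(a, x, P)
    return (b * pow(ax, P - 2, P)) % P
```

Rerandomized ciphertexts decrypt to the original message while carrying independent randomness, which is the statistical property the abstract's transformation achieves (up to negligible error).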
Let X_{m,ε} be the distribution over m bits X_1, …, X_m where the X_i are independent and each X_i equals 1 with probability (1 − ε)/2 and 0 with probability (1 + ε)/2. We consider the smallest value ε* of ε such that the distributions X_{m,ε} and X_{m,0} can be distinguished with constant advantage by a function f: {0,1}^m → S which is the product of k functions f_1, f_2, …, f_k on disjoint inputs of n bits, where each f_i: {0,1}^n → S and m = nk. We prove that ε* = Θ(1/(√n · log k)) if S = [−1,1], while ε* = Θ(1/√(nk)) if S is the set of unit-norm complex numbers.
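A quick Monte Carlo experiment makes the single-block case (k = 1) tangible: a majority test on n bits distinguishes bias ε ≈ 1/√n with constant advantage, which is where the √n in the bounds comes from. This sketch only illustrates the coin-problem setup with a simple test of our own choosing, not the paper's optimal distinguishers.

```python
import random

def sample(n, eps, rng):
    # One draw from X_{n,eps}: each bit is 1 w.p. (1 - eps)/2, else 0,
    # so X_{n,0} is the uniform distribution.
    return [1 if rng.random() < (1 - eps) / 2 else 0 for _ in range(n)]

def majority_test(bits):
    # A simple [-1,1]-valued test: +1 if zeros outnumber ones.
    return 1 if sum(bits) < len(bits) / 2 else -1

def advantage(n, eps, trials, seed=0):
    # Empirical distinguishing advantage of majority: E[f(X_{n,eps})] - E[f(X_{n,0})].
    rng = random.Random(seed)
    biased = sum(majority_test(sample(n, eps, rng)) for _ in range(trials)) / trials
    uniform = sum(majority_test(sample(n, 0.0, rng)) for _ in range(trials)) / trials
    return biased - uniform
```

With n = 101 and ε = 1/√n, the empirical advantage is a large constant (around 0.7), while it vanishes for ε = 0; the paper's results quantify how much products of k such bounded functions can improve on this.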
In the trace reconstruction problem, an unknown source string x ∈ {0,1}^n is sent through a probabilistic deletion channel which independently deletes each bit with probability δ and concatenates the surviving bits, yielding a trace of x. The problem is to reconstruct x given independent traces. This problem has received much attention in recent years, both in the worst-case setting where x may be an arbitrary string in {0,1}^n [DOS17, NP17, HHP18, HL18, Cha19] and in the average-case setting where x is drawn uniformly at random from {0,1}^n [PZ17, HPP18, HL18, Cha19].

This paper studies trace reconstruction in the smoothed analysis setting, in which a "worst-case" string x_worst is chosen arbitrarily from {0,1}^n, and then a perturbed version x of x_worst is formed by independently replacing each coordinate by a uniform random bit with probability σ. The problem is to reconstruct x given independent traces from it.

Our main result is an algorithm which, for any constant perturbation rate 0 < σ < 1 and any constant deletion rate 0 < δ < 1, uses poly(n) running time and traces and succeeds with high probability in reconstructing the string x. This stands in contrast with the worst-case version of the problem, for which exp(O(n^{1/3})) is the best known time and sample complexity [DOS17, NP17].

Our approach is based on reconstructing x from the multiset of its short subwords and is quite different from previous algorithms for either the worst-case or average-case versions of the problem. The heart of our work is a new poly(n)-time procedure for reconstructing the multiset of all O(log n)-length subwords of any source string x ∈ {0,1}^n given access to traces of x.
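The channel model described above is simple to simulate. The sketch below (with helper names of our own) generates the smoothed perturbation of a source string and independent traces from the deletion channel; it implements only the problem setup, not the reconstruction algorithm.

```python
import random

def perturb(x, sigma, rng):
    # Smoothed analysis step: independently replace each coordinate of the
    # worst-case string by a uniform random bit with probability sigma.
    return [rng.randrange(2) if rng.random() < sigma else b for b in x]

def trace(x, delta, rng):
    # Deletion channel: independently delete each bit with probability
    # delta and concatenate the surviving bits.
    return [b for b in x if rng.random() >= delta]
```

A reconstruction algorithm would be given many independent calls to `trace(x, delta, rng)` for the same (hidden) perturbed string x.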