We explore the power of interactive proofs with a distributed verifier. In this setting, the verifier consists of n nodes and a graph G that defines their communication pattern. The prover is a single entity that communicates with all nodes by short messages. The goal is to verify that the graph G belongs to some language in a small number of rounds and with small communication, i.e., small proof size.

This interactive model was introduced by Kol, Oshman and Saxena (PODC 2018) as a generalization of non-interactive distributed proofs. They demonstrated the power of interaction in this setting by constructing protocols for problems such as Graph Symmetry and Graph Non-Isomorphism, both of which require proofs of Ω(n^2) bits without interaction.

In this work, we provide a new general framework for distributed interactive proofs that allows one to translate standard interactive protocols (i.e., with a centralized verifier) into ones where the verifier is distributed, with a proof size that depends on the computational complexity of the verification algorithm run by the centralized verifier. We show the following:

• Every (centralized) computation that can be performed in time O(n) can be translated into a three-round distributed interactive protocol with O(log n) proof size. This implies that many graph problems on sparse graphs have succinct proofs (e.g., testing planarity).

• Every (centralized) computation implemented either in small space or by a uniform NC circuit can be translated into a distributed protocol with O(1) rounds and O(log n)-bit proof size in the small-space case, and with polylog(n) rounds and proof size for NC.

• We also demonstrate the power of our compilers for problems not captured by the above families.
We show that for Graph Non-Isomorphism, one of the striking demonstrations of the power of interaction, there is a 4-round protocol with O(log n) proof size, improving upon the O(n log n) proof size of Kol et al.

• For many problems we show how to reduce the proof size below the seemingly natural barrier of log n. By employing our RAM compiler, we get 5-round protocols with proof size O(log log n) for a family of problems including Fixed Automorphism, Clique and Leader Election (for the latter two problems we actually get O(1) proof size).

• Finally, we discuss how to make these proofs non-interactive arguments via random oracles.

Our compilers capture many natural problems and demonstrate the difficulty of showing lower bounds in these regimes.

The prover communicates with the nodes of the network in rounds. In each round, a node u sends the prover a random challenge R_u. The prover then responds by sending each node u its response Y_u. Nodes can exchange their proofs Y_u only with their immediate neighbors N(u) in the network in order to decide whether to accept the proof. For a proof to be accepted, all nodes must accept; for it to be rejected, it is enough that one node rejects. A simple example of a "distributed NP" proof is 3-coloring of a graph: the prover gives each node in the graph its color, and nodes exchange colors with their neigh...
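The 3-coloring example above can be sketched as a purely local check. This is an illustrative sketch (the function names and graph encoding are ours, not the paper's): each node verifies only its own label against the labels exchanged with its neighbors, and global acceptance requires every node to accept.

```python
def node_accepts(u, colors, adj):
    """Local verification run at node u, using only its own assigned color
    and the colors exchanged with its immediate neighbors N(u)."""
    if colors[u] not in (0, 1, 2):          # label must be a valid color
        return False
    return all(colors[u] != colors[v] for v in adj[u])

def verify_coloring(colors, adj):
    """Global acceptance: all nodes must accept; one rejection suffices
    to reject the proof."""
    return all(node_accepts(u, colors, adj) for u in adj)

# Usage: a 4-cycle, with a valid and an invalid prover-supplied coloring.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(verify_coloring({0: 0, 1: 1, 2: 0, 3: 1}, adj))  # True
print(verify_coloring({0: 0, 1: 0, 2: 0, 3: 1}, adj))  # False: edge (0,1) is monochromatic
```

Note that each node's decision depends only on information available within its one-hop neighborhood, which is what makes this a "distributed NP" proof.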
We investigate the adversarial robustness of streaming algorithms. In this context, an algorithm is considered robust if its performance guarantees hold even if the stream is chosen adaptively by an adversary that observes the outputs of the algorithm along the stream and can react in an online manner. While deterministic streaming algorithms are inherently robust, many central problems in the streaming literature do not admit sublinear-space deterministic algorithms; on the other hand, classical space-efficient randomized algorithms for these problems are generally not adversarially robust. This raises the natural question of whether there exist efficient adversarially robust (randomized) streaming algorithms for these problems.
Many efficient data structures use randomness, allowing them to improve upon deterministic ones. Usually, their efficiency and/or correctness are analyzed using probabilistic tools under the assumption that the inputs and queries are independent of the internal randomness of the data structure. In this work, we consider data structures in a more robust model, which we call the adversarial model. Roughly speaking, this model allows an adversary to choose inputs and queries adaptively according to previous responses. Specifically, we consider a data structure known as a "Bloom filter" and prove a tight connection between Bloom filters in this model and cryptography.

A Bloom filter represents a set S of elements approximately, using fewer bits than a precise representation. The price for succinctness is allowing some errors: for any x ∈ S it should always answer 'Yes', and for any x ∉ S it should answer 'Yes' only with small probability. In the adversarial model, we consider both efficient adversaries (that run in polynomial time) and computationally unbounded adversaries that are only bounded in the number of queries they can make. For computationally bounded adversaries, we show that non-trivial (memory-wise) Bloom filters exist if and only if one-way functions exist. For unbounded adversaries we show that there exists a Bloom filter for sets of size n and error ε that is secure against t queries and uses only O(n log(1/ε) + t) bits of memory. In comparison, n log(1/ε) is the best possible under a non-adaptive adversary.
Theorem 4.8. Let B be an (n, ε)-Bloom filter using m bits of memory. If pseudorandom permutations exist, then there exists a negligible function neg(·) such that for security parameter λ there exists an (n, ε + neg(λ))-strongly resilient Bloom filter that uses m′ = m + λ bits of memory.

Proof.
The main idea is to randomize the adversary's queries by applying a pseudorandom permutation (see Definition A.6) to them; then we may consider the queries as random and not as chosen adaptively by the adversary. Let B be an (n, ε)-Bloom filter using m bits of memory. We construct an (n, ε + neg(λ))-strongly resilient Bloom filter B′ as follows: to initialize B′ on a set S, we first choose a key K ∈ {0, 1}^λ for a pseudorandom permutation PRP over {0, 1}^{log u}. Let S′ = {PRP_K(x) : x ∈ S}. Then we initialize B with S′. For the query algorithm, on input x we output B(PRP_K(x)). Notice that the only additional memory we need is for storing the key K of the PRP, which takes λ bits. Moreover, the running time of the query algorithm of B′ is one pseudorandom permutation evaluation more than the query time of B.
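The mechanics of this construction can be sketched in Python. The Feistel-based permutation below merely stands in for a real PRP, and all names (feistel_prp, TinyBloom, WrappedBloom) are ours; the sketch only illustrates storing the key K and permuting every input before it reaches the underlying filter B:

```python
import hmac
import hashlib

def feistel_prp(key, x, bits=32, rounds=4):
    """Toy keyed permutation over {0,1}^bits via a 4-round Feistel network
    with an HMAC-SHA256 round function. A Feistel network is a bijection
    for any round function; this stands in for the PRP_K of the proof."""
    half = bits // 2
    mask = (1 << half) - 1
    L, R = (x >> half) & mask, x & mask
    for r in range(rounds):
        f = int.from_bytes(
            hmac.new(key, f"{r}:{R}".encode(), hashlib.sha256).digest()[:half // 8],
            "big") & mask
        L, R = R, L ^ f
    return (L << half) | R

class TinyBloom:
    """Stand-in for the underlying (n, ε)-Bloom filter B."""
    def __init__(self, m=4096, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)
    def _idx(self, x):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{x}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def insert(self, x):
        for j in self._idx(x):
            self.bits[j] = 1
    def query(self, x):
        return all(self.bits[j] for j in self._idx(x))

class WrappedBloom:
    """B′: stores only the key K in addition to B, and permutes every
    element before it touches B, so B sees random-looking inputs."""
    def __init__(self, base, key):
        self.base, self.key = base, key   # extra memory: just the key K
    def insert(self, x):
        self.base.insert(feistel_prp(self.key, x))
    def query(self, x):
        return self.base.query(feistel_prp(self.key, x))

# Usage: no false negatives survive the wrapping, since PRP_K is a bijection.
Bp = WrappedBloom(TinyBloom(), b"sixteen-byte-key")
for x in [3, 14, 159]:
    Bp.insert(x)
print(all(Bp.query(x) for x in [3, 14, 159]))  # True
```

Because the wrapper applies the same permutation at insertion and at query time, correctness for members is preserved exactly, matching the λ-bit memory overhead claimed in the theorem.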
A cycle cover of a bridgeless graph G is a collection of simple cycles in G such that each edge e appears on at least one cycle. The common objective in cycle cover computation is to minimize the total length of all cycles. Motivated by applications to distributed computation, we introduce the notion of low-congestion cycle covers, in which all cycles in the collection are both short and nearly edge-disjoint. Formally, a (d, c)-cycle cover of a graph G is a collection of cycles in G in which each cycle is of length at most d and each edge participates in at least one cycle and at most c cycles.

A priori, it is not clear that cycle covers that enjoy both a small overlap and a short cycle length even exist, nor whether it is possible to find them efficiently. Perhaps quite surprisingly, we prove the following: every bridgeless graph of diameter D admits a (d, c)-cycle cover where d = Õ(D) and c = Õ(1). That is, the edges of G can be covered by cycles such that each cycle is of length at most Õ(D) and each edge participates in at most Õ(1) cycles. These parameters are existentially tight up to polylogarithmic terms.

Furthermore, we show how to extend our result to achieve universally optimal cycle covers. Let C_e be the length of the shortest cycle that covers e, and let OPT(G) = max_{e ∈ G} C_e. We show that every bridgeless graph admits a (d, c)-cycle cover where d = Õ(OPT(G)) and c = Õ(1).

We demonstrate the usefulness of low-congestion cycle covers in different settings of resilient computation. For instance, we consider a Byzantine fault model where in each round the adversary chooses a single message and corrupts it in an arbitrary manner. We provide a compiler that turns any r-round distributed algorithm for a graph G with diameter D into an equivalent fault-tolerant algorithm with r · poly(D) rounds.

Theorem 2 (Optimal Cycle Cover, Informal).
There exists a construction of (nearly) universally optimal (d, c)-cycle covers with d = Õ(OPT(G)) and c = Õ(1), where OPT(G) is the best possible cycle length (i.e., even without the congestion constraint).

In fact, our algorithm can be made nearly optimal with respect to each individual edge. That is, we can construct a cycle cover that covers each edge e by a cycle whose length is Õ(|C_e|), where C_e is the shortest cycle in G that goes through e. The congestion of any edge remains Õ(1).

Turning to the distributed setting, we also provide a construction of cycle covers for the family of minor-closed graphs. Our construction is (nearly) optimal in terms of both its running time and the parameters of the cycle cover. Minor-closed graphs have recently attracted a lot of attention in the setting of distributed network optimization [GH16, HIZ16a, GP17, HLZ18, LMR18].

Theorem 3 (Optimal Cycle Cover Construction for Minor-Closed Graphs, Informal). For the family of minor-closed graphs, there exists an Õ(OPT(G))-round algorithm that constructs a (d, c)-cycle cover with d = Õ(OPT(G)) and c = Õ(1), where OPT(G) is equal to the best possible cycle length (i.e., even without the constraint on the...
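As a concrete reading of the (d, c)-cycle cover definition, here is a hypothetical checker (ours, not from the paper) that tests whether a given collection of cycles covers every edge, respects the length bound d, and keeps per-edge congestion at most c:

```python
from collections import Counter

def is_dc_cycle_cover(edges, cycles, d, c):
    """edges: set of frozenset({u, v}); cycles: list of vertex sequences,
    each implicitly closing back to its first vertex. Simplicity of the
    cycles is assumed rather than checked in this sketch."""
    load = Counter()
    for cyc in cycles:
        if len(cyc) > d:                        # cycle length bound d
            return False
        for u, v in zip(cyc, cyc[1:] + cyc[:1]):
            e = frozenset((u, v))
            if e not in edges:                  # must be a cycle in G
                return False
            load[e] += 1
    # every edge covered at least once, no edge used by more than c cycles
    return all(1 <= load[e] <= c for e in edges)

# Usage: a triangle is covered by its single cycle, a (3, 1)-cycle cover.
tri = {frozenset(e) for e in [(0, 1), (1, 2), (2, 0)]}
print(is_dc_cycle_cover(tri, [[0, 1, 2]], d=3, c=1))  # True
print(is_dc_cycle_cover(tri, [[0, 1, 2]], d=2, c=1))  # False: length bound violated
```

The two bounds pull in opposite directions: shorter cycles make each edge easier to cover quickly, while low congestion forbids reusing the same few short cycles everywhere, which is what makes the existence result above non-obvious.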