We develop secure threshold protocols for two important operations in lattice cryptography, namely, generating a hard lattice Λ together with a "strong" trapdoor, and sampling from a discrete Gaussian distribution over a desired coset of Λ using the trapdoor. These are the central operations of many cryptographic schemes: for example, they are exactly the key-generation and signing operations (respectively) for the GPV signature scheme, and they are the public parameter generation and private key extraction operations (respectively) for the GPV IBE. We also provide a protocol for trapdoor delegation, which is used in lattice-based hierarchical IBE schemes. Our work therefore directly transfers all these systems to the threshold setting.

Our protocols provide information-theoretic (i.e., statistical) security against adaptive corruptions in the UC framework, and they are private and robust against an optimal number of semi-honest or malicious parties. Our Gaussian sampling protocol is both noninteractive and efficient, assuming either a trusted setup phase (e.g., performed as part of key generation) or a sufficient amount of interactive but offline precomputation, which can be performed before the inputs to the sampling phase are known.
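At the core of the sampling operation is drawing from a discrete Gaussian. As a toy illustration only (a single-party, one-dimensional sampler, not the threshold protocol or the lattice-coset sampler described above), here is a minimal rejection sampler; the function name and the tail-cut parameter are our own illustrative choices:

```python
import math
import random

def sample_discrete_gaussian(center: float, sigma: float, tail: float = 12.0) -> int:
    """Rejection-sample an integer z with probability proportional to
    exp(-(z - center)^2 / (2 * sigma^2)), restricted to a +/- tail*sigma window
    (the mass outside such a window is negligible)."""
    lo = math.floor(center - tail * sigma)
    hi = math.ceil(center + tail * sigma)
    while True:
        z = random.randint(lo, hi)
        # Accept z with probability equal to the (unnormalized) Gaussian weight.
        rho = math.exp(-((z - center) ** 2) / (2 * sigma ** 2))
        if random.random() < rho:
            return z
```

In the threshold setting the difficulty is performing such sampling jointly, using shares of the trapdoor, without any party learning the trapdoor or the sampled preimage's sensitive randomness.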
Mechanism design for distributed systems is fundamentally concerned with aligning individual incentives with social welfare to avoid socially inefficient outcomes that can arise from agents acting autonomously. One simple and natural approach is to centrally broadcast nonbinding advice intended to guide the system to a socially near-optimal state while still harnessing the incentives of individual agents. The analytical challenge is proving fast convergence to near-optimal states, and in this article we give the first results showing that carefully constructed advice vectors yield stronger guarantees.

We apply this approach to a broad family of potential games modeling vertex cover and set cover optimization problems in a distributed setting. This class of problems is interesting because finding exact solutions to their optimization problems is NP-hard, yet highly inefficient equilibria exist, so a solution in which agents simply locally optimize is not satisfactory. We show that with an arbitrary advice vector, a set cover game quickly converges to an equilibrium with cost of the same order as the square of the social cost of the advice vector. More interestingly, we show how to efficiently construct an advice vector with a particular structure with cost O(log n) times the optimal social cost, and we prove that the system quickly converges to an equilibrium with social cost of this same order.
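The dynamics underlying these convergence results can be sketched as best-response moves starting from an advice assignment. The following is a minimal toy model of a fair-cost-sharing set cover game (all names and the cost-sharing rule are our own illustrative choices; the paper's game and advice structure are richer):

```python
def best_response_dynamics(sets, costs, advice):
    """Fair cost sharing: each element pays costs[s] / (number of elements choosing s).
    Starting from the advice assignment, let elements deviate to a strictly cheaper
    set until no improving move exists (a pure Nash equilibrium; this is a potential
    game, so the loop terminates)."""
    choice = dict(advice)  # element -> chosen set id

    def load(s):
        return sum(1 for sid in choice.values() if sid == s)

    improved = True
    while improved:
        improved = False
        for e in choice:
            cur = choice[e]
            cur_pay = costs[cur] / load(cur)
            for s, members in sets.items():
                if e in members and s != cur:
                    new_pay = costs[s] / (load(s) + 1)  # share after joining s
                    if new_pay < cur_pay - 1e-12:
                        choice[e] = s
                        cur, cur_pay = s, costs[s] / load(s)
                        improved = True
    return choice
```

Different advice vectors can lead to equilibria of very different social cost, which is exactly why constructing a good advice vector matters.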
We study the design of differentially private algorithms for adaptive analysis of dynamically growing databases, where a database accumulates new data entries while the analysis is ongoing. We provide a collection of tools for machine learning and other types of data analysis that guarantee differential privacy and accuracy as the underlying databases grow arbitrarily large. We give both a general technique and a specific algorithm for adaptive analysis of dynamically growing databases. Our general technique is illustrated by two algorithms that schedule black-box access to a static-database algorithm, generically transforming private and accurate algorithms for static databases into private and accurate algorithms for dynamically growing databases. These results show that almost any private and accurate algorithm can be rerun at appropriate points of data growth with minimal loss of accuracy, even when data growth is unbounded. Our specific algorithm directly adapts the private multiplicative weights algorithm of [HR10] to the dynamic setting, maintaining the accuracy guarantee of the static setting through unbounded data growth. Along the way, we develop extensions of several other differentially private algorithms to the dynamic setting, which may be of independent interest for future work on the design of differentially private algorithms for growing databases.
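A minimal sketch of the rerun-at-growth idea, assuming a Laplace-noised count as the static black-box algorithm, reruns scheduled at doubling database sizes, and a geometrically decaying budget split so the total privacy cost stays bounded even under unbounded growth (the names and this particular schedule are illustrative, not the paper's exact scheduler):

```python
import math
import random

def laplace(scale):
    """Sample Laplace noise with the given scale via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_count(db, eps):
    """Static eps-DP count via the Laplace mechanism (sensitivity 1)."""
    return len(db) + laplace(1.0 / eps)

def growing_counts(stream, eps_total):
    """Rerun the static algorithm each time the database size doubles,
    spending eps_total / 2^k on the k-th rerun; the budgets sum to at most
    eps_total over unboundedly many reruns, while the database doubling
    keeps the noise small relative to the current size."""
    db, releases, k, next_size = [], [], 1, 1
    for x in stream:
        db.append(x)
        if len(db) >= next_size:
            releases.append(private_count(db, eps_total / (2 ** k)))
            k += 1
            next_size *= 2
    return releases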
In many real world scenarios, terms of service allow a producer of a service to collect data from its users. Producers value data but often only compensate users for their data indirectly with reduced prices for the service. This work considers how a producer (data analyst) may offer differential privacy as a premium service for its users (data subjects), where the degree of privacy offered may itself depend on the user data. Along the way, it strengthens prior negative results for privacy markets to the pay-for-privacy setting and develops a new notion of endogenous differential privacy. A positive result for endogenous privacy is given in the form of a class of mechanisms for privacy-as-a-service markets that 1) determine ɛ using the privacy and accuracy preferences of a heterogeneous body of data subjects and a single analyst, 2) collect and distribute payments for the chosen level of privacy, and 3) privately analyze the database. These mechanisms are endogenously differentially private with respect to data subjects’ privacy preferences as well as their private data, they directly elicit data subjects’ true preferences, and they determine a level of privacy that is efficient given all parties’ preferences.
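As a toy illustration of determining ε from the parties' preferences (our own simplified model, not the paper's mechanism: we assume linear per-subject privacy costs c_i·ε, a concave analyst benefit v·(1 − e^(−ε)), and welfare maximization, whereas the actual mechanisms also handle truthful elicitation and endogenous privacy of the preferences themselves):

```python
import math

def choose_epsilon(costs, analyst_value, eps_max=5.0):
    """Welfare = analyst_value * (1 - exp(-eps)) - sum(costs) * eps.
    The first-order condition analyst_value * exp(-eps) = sum(costs)
    gives eps = ln(analyst_value / sum(costs)) when that is positive."""
    total = sum(costs)
    if total <= 0:
        return eps_max
    if analyst_value <= total:
        return 0.0  # privacy is too costly relative to the analyst's benefit
    return min(eps_max, math.log(analyst_value / total))

def payments(costs, eps):
    """Compensate each data subject for her reported linear privacy cost."""
    return [c * eps for c in costs]
```

With two subjects reporting cost coefficients 0.5 each and an analyst value of e, the welfare-maximizing level is ε = 1, and each subject is paid 0.5 for participating at that privacy level.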