Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
We present the first sample-optimal sublinear time algorithms for the sparse Discrete Fourier Transform over a two-dimensional √n × √n grid. Our algorithms are analyzed for average case signals. For signals whose spectrum is exactly sparse, our algorithms use O(k) samples and run in O(k log k) time, where k is the expected sparsity of the signal. For signals whose spectrum is approximately sparse, our algorithm uses O(k log n) samples and runs in O(k log² n) time; the latter algorithm works for k = Θ(√n). The number of samples used by our algorithms matches the known lower bounds for the respective signal models. By a known reduction, our algorithms give similar results for the one-dimensional sparse Discrete Fourier Transform when n is a power of a small composite number (e.g., n = 6^t).
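The "known reduction" from the one-dimensional to the two-dimensional DFT relies on a coprime factorization n = N1·N2 and a CRT-based index remapping (the Good–Thomas mapping). Below is a small self-contained sketch of that remapping for n = 6, using a naive DFT for clarity; this illustrates only the index mapping, not the paper's sublinear-time algorithms.

```python
import cmath

def dft(x):
    # Naive 1D DFT, O(n^2), for illustration only.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * t * k / n) for t in range(n))
            for k in range(n)]

def dft2(a):
    # Naive 2D DFT: transform rows, then columns.
    n1, n2 = len(a), len(a[0])
    rows = [dft(row) for row in a]
    cols = [dft([rows[i][j] for i in range(n1)]) for j in range(n2)]
    return [[cols[j][i] for j in range(n2)] for i in range(n1)]

N1, N2 = 2, 3                 # coprime factors, N = 6 = 6^1
N = N1 * N2
x = [complex(t * t + 1, -t) for t in range(N)]   # arbitrary test signal

# Input mapping: place x[(N2*n1 + N1*n2) mod N] at grid position (n1, n2).
a = [[x[(N2 * n1 + N1 * n2) % N] for n2 in range(N2)] for n1 in range(N1)]
A = dft2(a)

# Output mapping via CRT: k ≡ k1 (mod N1) and k ≡ k2 (mod N2).
u2 = pow(N2, -1, N1)          # N2^{-1} mod N1
u1 = pow(N1, -1, N2)          # N1^{-1} mod N2
X = dft(x)
for k1 in range(N1):
    for k2 in range(N2):
        k = (N2 * u2 * k1 + N1 * u1 * k2) % N
        assert abs(A[k1][k2] - X[k]) < 1e-9
```

Because the cross terms in the exponent pick up a factor N1·N2 ≡ 0 (mod N), the 1D twiddle factor splits into independent N1-point and N2-point factors, so the remapped 2D DFT reproduces the 1D spectrum exactly (up to index permutation), with no twiddle multiplications between stages.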
We present decidability results for a sub-class of "non-interactive" simulation problems, a well-studied class of problems in information theory. A non-interactive simulation problem is specified by two distributions P(x, y) and Q(u, v): the goal is to determine if two players, Alice and Bob, that observe sequences X^n and Y^n respectively, where the pairs {(X_i, Y_i)} are drawn i.i.d. from P(x, y), can generate outputs U and V respectively (without communicating with each other) whose joint distribution is arbitrarily close in total variation distance to Q(u, v).
Consider the setup where n parties are each given an element x_i in the finite field F_q, and the goal is to compute the sum ∑_i x_i in a secure fashion and with as little communication as possible. We study this problem in the anonymized model of Ishai et al. (FOCS 2006), where each party may broadcast anonymous messages on an insecure channel.
We present a new analysis of the one-round "split and mix" protocol of Ishai et al. In order to achieve the same security parameter, our analysis reduces the required number of messages by a Θ(log n) multiplicative factor.
We also prove lower bounds showing that the dependence of the number of messages on the domain size, the number of parties, and the security parameter is essentially tight.
Using a reduction of Balle et al. (2019), our improved analysis of the protocol of Ishai et al. yields, in the same model, an (ε, δ)-differentially private protocol for aggregation that, for any constant ε and any δ inverse-polynomial in n, incurs only a constant error and requires only a constant number of messages per party. Previously, such a protocol was known only for Ω(log n) messages per party.
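The split-and-mix protocol itself is simple to state: each party splits its input into additive shares that sum to the input mod q and broadcasts the shares anonymously; the server sums everything it receives. The sketch below is a toy functional model of that idea (the anonymizer is modeled as a shuffle, and the security analysis — the subject of the abstract above — is of course not captured here). All names are illustrative, not from the paper.

```python
import random

def split_and_mix(inputs, q, m):
    """Each party splits its input into m additive shares mod q and
    broadcasts them anonymously (modeled here by shuffling all messages)."""
    messages = []
    for x in inputs:
        shares = [random.randrange(q) for _ in range(m - 1)]
        shares.append((x - sum(shares)) % q)   # shares sum to x mod q
        messages.extend(shares)
    random.shuffle(messages)                   # the anonymizer's view
    return messages

def aggregate(messages, q):
    # The server simply sums all received messages mod q.
    return sum(messages) % q

q = 101
inputs = [17, 42, 99, 5]
msgs = split_and_mix(inputs, q, m=3)
assert aggregate(msgs, q) == sum(inputs) % q
```

Correctness is immediate, since each party's shares sum to its input mod q; the nontrivial question, analyzed in the paper, is how large m must be so that the shuffled multiset of shares reveals essentially nothing beyond the sum.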