Differential privacy is the de facto standard for protecting privacy in a variety of applications. One of the key challenges is private data release, which is particularly relevant in scenarios where limited information about the desired statistics is available beforehand. Recent work has presented a differentially private data release algorithm that achieves optimal rates of order n^{-1/d}, with n being the size of the dataset and d being the dimension, for the worst-case error over all Lipschitz continuous statistics. This type of guarantee is desirable in many practical applications, since, for instance, it ensures that clusters present in the data are preserved. However, due to these slow rates, the approach is often infeasible in practice unless the dimension of the data is small. We demonstrate that these rates can be significantly improved to n^{-1/s} when only guarantees over s-sparse Lipschitz continuous functions are required, or to n^{-1/(s+1)} when the data lies on an unknown s-dimensional subspace, disregarding logarithmic factors. We therefore obtain practically meaningful rates for moderate constants s, which motivates future work on computationally efficient approximate algorithms for this problem.
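As a rough illustration of the gap (our own arithmetic, not figures from the paper): with n = 10^6 records and ambient dimension d = 100, the worst-case rate n^{-1/d} = 10^{-0.06} ≈ 0.87 barely decays with the sample size, whereas restricting attention to s-sparse Lipschitz statistics with s = 5 gives n^{-1/s} = 10^{-1.2} ≈ 0.06, and data on an unknown 5-dimensional subspace gives n^{-1/(s+1)} = 10^{-1} = 0.1.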
Federated Learning (FL) is a powerful technique for training a model on a server with data from several clients in a privacy-preserving manner. In FL, the server sends the model to every client; each client trains the model locally and sends it back to the server. The server aggregates the updated models and repeats the process for several rounds. FL incurs significant communication costs, in particular when transmitting the updated local models from the clients back to the server. Recently proposed algorithms quantize the model parameters to efficiently compress FL communication. These algorithms typically have a quantization level that controls the compression factor. We find that dynamic adaptation of the quantization level can boost compression without sacrificing model quality. First, we introduce a time-adaptive quantization algorithm that increases the quantization level as training progresses. Second, we introduce a client-adaptive quantization algorithm that assigns each individual client the optimal quantization level at every round. Finally, we combine both algorithms into DAdaQuant, the doubly-adaptive quantization algorithm. Our experiments show that DAdaQuant consistently improves client→server compression, outperforming the strongest non-adaptive baselines by up to 2.8×.
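To make the time-adaptive idea concrete, the sketch below is a minimal illustration assuming a generic uniform stochastic quantizer and a hypothetical geometric level schedule; DAdaQuant's actual quantizer, schedule, and per-client level assignment are defined in the paper and are not reproduced here.

    import numpy as np

    def quantize(params, level):
        # Generic uniform stochastic quantizer with `level` buckets per unit
        # (illustrative only; not the exact scheme used by DAdaQuant).
        scale = np.max(np.abs(params)) + 1e-12
        scaled = params / scale * level          # values in [-level, level]
        lower = np.floor(scaled)
        prob_up = scaled - lower                 # unbiased stochastic rounding
        rounded = lower + (np.random.rand(*params.shape) < prob_up)
        return rounded / level * scale

    def time_adaptive_level(round_idx, base_level=2, growth=2):
        # Hypothetical schedule: quantize coarsely at first, refine over time.
        return int(base_level * growth ** (round_idx // 50))

    # Example: the same client update is compressed more aggressively early on.
    update = np.random.randn(1000)
    for r in (0, 50, 150):
        level = time_adaptive_level(r)
        err = np.linalg.norm(update - quantize(update, level))
        print(f"round {r}: level={level}, quantization error={err:.3f}")

The intuition behind such a schedule is that early rounds tolerate coarse updates because the model is still far from convergence, so fewer bits per parameter need to be sent when precision matters least.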