Existing work has conducted in-depth research on global differential privacy (GDP) and local differential privacy (LDP) from an information-theoretic standpoint. However, the data privacy community still lacks a systematic review of GDP and LDP based on the information-theoretic channel model. To this end, this survey systematically reviews GDP and LDP from the perspective of the information-theoretic channel. First, we present the privacy threat model under the information-theoretic channel. Second, we describe and compare the information-theoretic channel models of GDP and LDP. Third, we summarize and analyze the definitions, privacy-utility metrics, properties, and mechanisms of GDP and LDP under their respective channel models. Finally, building on this review, we discuss open problems for GDP and LDP under different types of information-theoretic channel models. Our main contribution is a systematic survey of the channel models, definitions, privacy-utility metrics, properties, and mechanisms of GDP and LDP from the information-theoretic channel perspective, together with a survey of differentially private synthetic data generation using generative adversarial networks and federated learning. This work supports a systematic understanding of the privacy threat model, definitions, privacy-utility metrics, properties, and mechanisms of GDP and LDP from the information-theoretic channel perspective, and promotes further research on GDP and LDP based on different types of channel models.
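To make the channel view concrete, an LDP mechanism can be written as a row-stochastic conditional-probability matrix Q, where Q[x, y] is the probability that a user holding value x reports y. The sketch below builds the k-ary randomized response channel, a standard LDP mechanism; the function name and interface are illustrative, not from the survey.

```python
import numpy as np

def randomized_response_channel(k, eps):
    """k-ary randomized response as a channel matrix:
    Q[x, y] = P(report y | true value x)."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)  # probability of reporting the truth
    q = 1.0 / (np.exp(eps) + k - 1)          # probability of any other symbol
    Q = np.full((k, k), q)
    np.fill_diagonal(Q, p)
    return Q

Q = randomized_response_channel(k=4, eps=1.0)
# Every row sums to 1 (a valid channel), and for every output column
# the ratio max_x Q[x, y] / min_x Q[x, y] equals e^eps, i.e. eps-LDP.
```

Under this view, GDP and LDP differ in what the channel's input is: a whole neighboring database (GDP) versus a single user's value (LDP).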
Federated learning has become a widely used distributed learning approach in recent years. Although training shifts from collecting raw data to gathering model parameters, privacy violations may still occur when models are published and shared. A dynamic approach is proposed to add Gaussian noise more effectively and apply differential privacy to federated deep learning. Concretely, it abandons the traditional strategy of distributing the privacy budget equally and instead adjusts the budget dynamically to suit gradient-descent-based federated learning, with the relevant parameters derived by computation rather than set manually, avoiding the impact of hand-tuned hyperparameters on the algorithm. It also incorporates adaptive threshold clipping to control sensitivity. Finally, the moments accountant tracks the privacy budget consumed, and learning stops only when the budget set by the clients is exhausted, allowing the privacy budget to be fully exploited during model training. Experimental results on real datasets show that models trained with this method achieve accuracy close to that of non-private learning and significantly outperform the differential privacy method provided by TensorFlow.
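The abstract does not spell out its dynamic budget allocation or adaptive clipping rule, but the core aggregation step it builds on is standard: clip each client's update, sum, add Gaussian noise calibrated to the clipping threshold, and average. A minimal sketch of that fixed-budget step, with an illustrative function name and the classical Gaussian-mechanism noise scale, assuming per-client updates as NumPy arrays:

```python
import numpy as np

def dp_federated_step(per_client_grads, clip_norm, eps, delta, rng):
    """One differentially private aggregation step: clip each client's
    update to L2 norm clip_norm, sum, add Gaussian noise, and average."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_client_grads]
    total = np.sum(clipped, axis=0)
    # Classical Gaussian-mechanism scale for (eps, delta)-DP (eps < 1);
    # the sensitivity of the clipped sum is clip_norm.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    noisy = total + rng.normal(0.0, sigma, size=total.shape)
    return noisy / len(per_client_grads)
```

The paper's contribution would replace the fixed `eps` here with a per-round budget schedule and set `clip_norm` adaptively; the moments accountant then gives a tighter total privacy cost over many such steps than simple composition.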
Collecting and analyzing data can generate a wealth of knowledge, but it can also raise privacy concerns. Local differential privacy (LDP) is the latest privacy standard to address this issue and has been deployed on platforms such as Chrome, iOS, and macOS. In an LDP solution, users first perturb their data on the client side and then upload the perturbed data to the server, which protects not only against background-knowledge attacks but also against untrusted servers. However, existing multidimensional solutions ignore users' personalized privacy needs. In this paper, we meet those needs while reducing the mean squared error of the perturbed data. Specifically, we first design a personalized privacy budget allocation within a given range, which satisfies users' personalized privacy requirements. We then optimize the sampling dimension of the existing solution, yielding a smaller mean squared error for the perturbed data. Finally, we propose our solution for collecting multidimensional numerical data and estimating its mean. We also conduct experiments on two real datasets; the results demonstrate that the mean squared error of our solution is lower than that of existing solutions.
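A common building block for LDP mean estimation over numerical data in [-1, 1] is Duchi et al.'s mechanism, which reports one of two extreme values with probabilities chosen so the report is unbiased. The sketch below is a generic 1-D version with a personalized twist (each user applies their own budget), not the paper's specific multidimensional protocol; function names are illustrative.

```python
import numpy as np

def duchi_perturb(x, eps, rng):
    """Duchi et al.'s 1-D mechanism: maps x in [-1, 1] to {-C, +C}.
    The output is an unbiased estimate of x and satisfies eps-LDP."""
    e = np.exp(eps)
    C = (e + 1.0) / (e - 1.0)
    p_plus = 0.5 + x * (e - 1.0) / (2.0 * (e + 1.0))  # P(output = +C)
    return C if rng.random() < p_plus else -C

def estimate_mean(values, budgets, rng):
    """Personalized variant: user i perturbs with their own budget eps_i.
    Each report is individually unbiased, so the plain average is too."""
    return np.mean([duchi_perturb(x, e, rng)
                    for x, e in zip(values, budgets)])
```

Since E[output] = C(2·p_plus − 1) = x, averaging many reports recovers the population mean; smaller budgets inflate C and hence the variance, which is the utility cost the paper's budget allocation and sampling optimization aim to reduce.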