Location-based services are increasingly ubiquitous and facilitate many activities of daily life. However, releasing real locations raises serious privacy concerns. To address these concerns, a number of location privacy protection mechanisms (LPPMs) have been proposed, e.g., spatial cloaking, dummy location generation, query caching, and perturbation. These LPPMs remain vulnerable to inference attacks, however, because they capture privacy risks incompletely: they ignore the heterogeneous correlations present in location data, e.g., semantic, temporal, and social correlations. Consequently, they cannot provide sufficient privacy guarantees, since such correlations are not embedded in the design of the LPPMs. To address these issues, we present QUAD, a framework for quantifying location privacy risks under heterogeneous correlations. QUAD has three features: (i) it models and seamlessly fuses the multiple kinds of correlations available to adversaries; (ii) it provides a probabilistic representation of the privacy risks faced under heterogeneous correlations; and (iii) it quantifies privacy risks for multiple kinds of LPPMs widely used in the literature. To mitigate privacy threats, we further propose a defense mechanism that embeds the quantified privacy risks. Extensive experiments on two real-world datasets confirm that QUAD captures more privacy risks than its competitors and that our defense mechanism dramatically reduces those risks.
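The abstract does not spell out QUAD's risk model, but as a rough illustration of what a probabilistic quantification of privacy risk can look like for a perturbation LPPM, the sketch below computes a Bayesian adversary's posterior over candidate true locations after observing one obfuscated report. The mechanism, the uniform prior, and the max-posterior risk measure are assumptions made for this example only, not QUAD's actual correlation-aware model.

```python
import numpy as np

def perturbation_channel(locations, scale=1.0):
    """Pr[reported z | true x] under a simple Laplace-like mechanism (assumed)."""
    dist = np.abs(locations[:, None] - locations[None, :])
    channel = np.exp(-dist / scale)
    return channel / channel.sum(axis=1, keepdims=True)  # rows: true x, cols: reported z

def inference_risk(prior, channel, reported_idx):
    """Adversary's posterior confidence about the true location after one report."""
    likelihood = channel[:, reported_idx]   # Pr[z | x] for every candidate true location x
    posterior = prior * likelihood
    posterior /= posterior.sum()
    return posterior.max()                  # probability that a Bayes-optimal guess is correct

locations = np.arange(10, dtype=float)      # toy 1-D location grid
prior = np.full(10, 0.1)                    # uniform adversary prior over true locations
channel = perturbation_channel(locations, scale=2.0)
print(inference_risk(prior, channel, reported_idx=4))
```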
Many data analysis applications rely on social networks, which contain abundant information about individuals. However, these applications can leak individuals' private information. To protect the privacy of individuals in social networks, several approaches based on graph generation models or differential privacy have been proposed that publish a synthetic social network in place of the original graph for data analysis. These techniques, however, can cause a serious loss of data utility, especially with respect to the real social links. In this paper, we propose a degree-differential-privacy graph generation approach based on field theory. The approach publishes a social network in two steps. First, the node degrees are perturbed under differential privacy by adding noise drawn from a Laplace distribution. Then, the edges of the social network are synthesized with field theory. We propose a field theory model for social networks that simulates the law of gravity in physics and establish a correspondence between the gravitational field in physics and our model. When an edge is formed, the starting node is preferentially chosen with high probability from nodes with high degrees, and the ending node is then selected with high probability when the interaction force between the starting node and the ending node is large. Extensive experiments over four datasets show that our approach preserves more real social ties than previous approaches without incurring a loss of structural features of the datasets, such as the degree distribution and clustering coefficients.

INDEX TERMS: Differential privacy, graph generation model, privacy-preserving data publication, social networks.
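The abstract describes the two-step pipeline concretely enough to sketch. The following Python sketch illustrates one way it could look, assuming unit sensitivity for the Laplace degree perturbation, a Newtonian-style force deg_u * deg_v / dist(u, v)^2, and an arbitrary node embedding standing in for distance; the function names, the sensitivity, and the distance measure are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def perturb_degrees(degrees, epsilon, sensitivity=1.0):
    """Step 1 (sketch): perturb each node's degree with Laplace noise.

    `sensitivity` is an assumption; the paper's sensitivity analysis is not
    given in the abstract.
    """
    degrees = np.asarray(degrees, dtype=float)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=degrees.size)
    noisy = np.rint(degrees + noise).astype(int)
    return np.clip(noisy, 1, degrees.size - 1)   # keep degrees in a valid, nonzero range

def synthesize_edges(noisy_degrees, positions, num_edges):
    """Step 2 (sketch): form edges with a gravity-like interaction force."""
    n = len(noisy_degrees)
    start_prob = noisy_degrees / noisy_degrees.sum()          # prefer high-degree starting nodes
    edges = set()
    while len(edges) < num_edges:
        u = np.random.choice(n, p=start_prob)
        dist = np.linalg.norm(positions - positions[u], axis=1) + 1e-9
        force = noisy_degrees * noisy_degrees[u] / dist**2    # gravity-style attraction (assumed form)
        force[u] = 0.0                                        # no self-loops
        v = np.random.choice(n, p=force / force.sum())        # prefer strongly attracted ending nodes
        edges.add((min(u, v), max(u, v)))
    return edges

# Toy usage with an assumed 2-D node embedding standing in for pairwise distance.
degrees = np.array([5, 3, 4, 2, 6, 1, 3, 4])
positions = np.random.rand(len(degrees), 2)
noisy = perturb_degrees(degrees, epsilon=1.0)
print(synthesize_edges(noisy, positions, num_edges=10))
```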