Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data
DOI: 10.1145/1376616.1376666

Preservation of proximity privacy in publishing numerical sensitive data

Abstract: We identify proximity breach as a privacy threat specific to numerical sensitive attributes in anonymized data publication. Such a breach occurs when an adversary concludes with high confidence that the sensitive value of a victim individual must fall in a short interval, even though the adversary may have low confidence about the victim's actual value. None of the existing anonymization principles (e.g., k-anonymity, l-diversity) can effectively prevent proximity breach. We remedy the problem by introducing…
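The abstract's remedy is elided in the truncation above, so the sketch below illustrates only the threat itself, not the paper's principle. It estimates an adversary's best confidence that a victim's numeric sensitive value falls inside a short interval around some guess; the function name, the interval half-width `eps`, and the sample salaries are all invented for illustration:

```python
def proximity_breach_confidence(group, eps):
    """Adversary's best confidence that a victim's sensitive value lies
    within eps of some guessed value: for each value v in the anonymized
    group, take the fraction of records inside [v - eps, v + eps], and
    return the maximum over all v."""
    return max(
        sum(1 for u in group if abs(u - v) <= eps) / len(group)
        for v in group
    )

# Five distinct salaries satisfy 5-anonymity and 5-diversity on exact
# values, yet all sit in a 400-dollar band, so the adversary is certain
# the victim earns roughly 50k: a proximity breach.
salaries = [50_000, 50_100, 50_200, 50_300, 50_400]
print(proximity_breach_confidence(salaries, eps=500))  # 1.0
print(proximity_breach_confidence(salaries, eps=100))  # 0.6
```

Note that the eps=100 case shows why low confidence about the *exact* value (at most 1/5 here) does not rule out high confidence about a *range*.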

Cited by 90 publications (67 citation statements). References 43 publications.
“…In some contexts, the basic principle of k-Anonymity is not sufficient to protect data privacy, for example in a group of data with little diversity or high similarity. Therefore, a number of proposals designed new privacy principles to enhance the privacy of k-Anonymity [12,13,5,6,7]. However, they are not easy to apply in the private retrieval of public data applications.…”
Section: Related Work
confidence: 99%
“…As pointed out in [5,6,7], there should be enough difference between the data items in an anonymized range (in a bounding box in the case of bbPIR) under a privacy breach probability P_brh, which we call neighborhood difference, otherwise the private data can be determined in a narrow range with probability P_brh. Instead of using the non-numeric Adult data set, we generated a synthetic data set with 10^6 numeric data keys and values, which follow a Zipf distribution and are in the range of [0.0, 1.0].…”
Section: Proximity Privacy of Numeric Data
confidence: 99%
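The synthetic workload quoted above — 10^6 numeric keys following a Zipf distribution, in the range [0.0, 1.0] — can be sketched as follows. The quoted statement gives neither the Zipf exponent nor the support, so the exponent `a = 2.0`, the rank bound `N`, and the min–max rescaling are all assumptions:

```python
import bisect
import itertools
import random

random.seed(42)

# Bounded Zipf over ranks 1..N with exponent a; both N and a are
# assumed parameters not stated in the quoted experiment description.
N, a = 10_000, 2.0
weights = [r ** -a for r in range(1, N + 1)]
cdf = list(itertools.accumulate(weights))
total = cdf[-1]

def zipf_draw():
    """Inverse-transform sample of a Zipf-distributed rank in 1..N."""
    return bisect.bisect_left(cdf, random.random() * total) + 1

raw = [zipf_draw() for _ in range(10**6)]

# Min-max rescale the keys into [0.0, 1.0], the range cited above.
lo, hi = min(raw), max(raw)
keys = [(v - lo) / (hi - lo) for v in raw]
```

Such a skewed key distribution concentrates most records near one end of the range, which is exactly the setting where a neighborhood-difference check matters: many near-identical values in one bounding box.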
“…In the setting of central publication, a publisher intends to release an anonymized version T * of the microdata table T , such that no malicious user, called an attacker, can infer the sensitive information regarding any individual from T * , whereas the statistical utility of T is still preserved in T * . Towards this end, a bulk of work has been done on anonymized data publication [1], [3], [7], [9], [10], [11], [12], [13], [14], [15], [16]. One of the major aims is to address association attack: the attacker possesses the exact non-sensitive (quasi-identifier (QI)) values of the victim, and attempts to discover his/her sensitive (SA) value from the published table T * .…”
Section: Introduction
confidence: 99%
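The association attack described above amounts to a join between the attacker's exact quasi-identifier (QI) knowledge and the published table T*: if every matching row shares one sensitive value, the attack succeeds. A minimal sketch — the toy rows, the QI attributes (age range, zipcode prefix), and the victim are all invented for illustration:

```python
# Anonymized rows of a hypothetical published table T*:
# (age_range, zipcode_prefix, sensitive_disease)
published = [
    ("[20-30)", "537**", "flu"),
    ("[20-30)", "537**", "bronchitis"),
    ("[30-40)", "538**", "cancer"),
    ("[30-40)", "538**", "cancer"),
]

# The attacker knows the victim's exact QI values and filters T*.
victim_qi = ("[30-40)", "538**")
candidates = {sa for age, zc, sa in published if (age, zc) == victim_qi}

# A single candidate means the association attack succeeds outright.
print(candidates)  # {'cancer'}
```

Diversity-style principles block exactly this case by forcing each QI group to contain several well-represented sensitive values; the proximity-breach point of the surveyed paper is that for numeric data, distinct but *close* values still leak.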
“…Existing privacy principles and definitions (e.g., [9], [10], [11], [12], [13], [15], [16]), however, are incapable of capturing this general form of breach because of their assumptions regarding the underlying data models. Recently, ( , δ) k-dissimilarity [14], a data-model-independent privacy principle, has been proposed as an effective countermeasure against general proximity breach.…”
Section: Introduction
confidence: 99%