Proceedings of the 2006 ACM SIGMOD International Conference on Management of Data
DOI: 10.1145/1142473.1142500
Personalized privacy preservation

Abstract: We study generalization for preserving privacy in publication of sensitive data. The existing methods focus on a universal approach that exerts the same amount of preservation for all persons, without catering for their concrete needs. The consequence is that we may be offering insufficient protection to a subset of people, while applying excessive privacy control to another subset. Motivated by this, we present a new generalization framework based on the concept of personalized anonymity. Our technique perform…

Cited by 553 publications (311 citation statements); references 25 publications.
“…This solution is based on the intuition that each piece of private data is associated with a given sensitivity level, which depends on the precision of the data itself; generally, the less precise the data, the less sensitive it is. Obfuscation techniques have been applied to the protection of microdata released from databases (e.g., [35]). …”
Section: Obfuscation of Context Data (mentioning, confidence: 99%)
“…Clearly, ignoring it may lead to offering insufficient protection to a subset of people while applying excessive protection to the privacy of another subset. It is worth pointing out that we do not draw a distinction between sensitive attributes and quasi-identifiers [13], [24], [12]. Rather, our framework provides more flexibility by enabling the owners of the data to supply the sensitivity of their attributes at their discretion.…”
Section: Privacy Risk Framework (mentioning, confidence: 99%)
“…Maximizing the utility u is analogous to minimizing the information loss Δu and, therefore, it is straightforward to transfer the optimization problem from one of these utility measures to the other. Xiao and Tao [17] defined the information loss as follows: Δu(a) = Σ_{i=1}^{k} (n_i − 1)/m_i, where m_i and n_i are defined as above. Likewise, Iyengar [4] proposes the LM loss metric, which is based on summing up normalized information losses for each attribute.…”
Section: A. Utility Assessment Models (mentioning, confidence: 99%)
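The loss metric quoted above can be illustrated with a short sketch. Since the definitions of m_i and n_i are elided here ("defined as above"), the sketch assumes the common reading: for each of the k attributes, m_i is the size of the attribute's domain and n_i is the number of domain values covered by the published (generalized) value, so an exact value (n_i = 1) contributes zero loss and full suppression (n_i = m_i) contributes the maximum (m_i − 1)/m_i.

```python
def information_loss(covered, domain_sizes):
    """Compute Δu(a) = Σ_{i=1}^{k} (n_i - 1) / m_i for one record.

    covered      -- list of n_i: values covered by each generalized attribute
    domain_sizes -- list of m_i: total domain size of each attribute
    (Interpretation of n_i and m_i is an assumption; see lead-in.)
    """
    return sum((n - 1) / m for n, m in zip(covered, domain_sizes))

# An ungeneralized record loses nothing; a fully generalized one
# approaches a loss of 1 per attribute.
exact = information_loss([1, 1], [10, 5])        # 0/10 + 0/5 = 0.0
coarse = information_loss([10, 5], [10, 5])      # 9/10 + 4/5 = 1.7
```

Under this reading, maximizing utility and minimizing Δu are indeed interchangeable, since utility decreases monotonically as each n_i grows.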
“…A personalized generalization technique is proposed by Xiao and Tao [17]. Under this approach, users define maximum allowable specialization levels for their different attributes.…”
Section: Related Work (mentioning, confidence: 99%)
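The personalized scheme described in this citation can be sketched as follows. The sketch assumes a taxonomy tree over a sensitive attribute, in which each record owner names a "guarding node": the published value must be no more specific than that node. The taxonomy and value names below are illustrative, not taken from the paper.

```python
# Hypothetical disease taxonomy as child -> parent links (names are illustrative).
PARENT = {
    "flu": "respiratory-infection",
    "pneumonia": "respiratory-infection",
    "respiratory-infection": "any-illness",
    "gastric-ulcer": "stomach-disease",
    "stomach-disease": "any-illness",
}

def ancestors(value):
    """Return value followed by its chain of ancestors up to the root."""
    chain = [value]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

def personalized_generalize(value, guarding_node):
    """Publish the owner's guarding node when it lies on the value's
    ancestor chain (i.e., the value sits in the guarded subtree);
    otherwise the value is already outside the subtree and is published
    unchanged."""
    return guarding_node if guarding_node in ancestors(value) else value
```

For example, an owner with value "flu" who guards at "respiratory-infection" has that coarser node published, while an owner who guards at their own leaf value requests no generalization at all. This is what distinguishes the personalized framework from universal k-anonymity: the degree of protection varies per record owner rather than being fixed globally.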