Proceedings of the 2009 ACM SIGMOD International Conference on Management of Data
DOI: 10.1145/1559845.1559861
Attacks on privacy and deFinetti's theorem

Abstract: In this paper we present a method for reasoning about privacy using the concepts of exchangeability and deFinetti's theorem. We illustrate the usefulness of this technique by using it to attack a popular data sanitization scheme known as Anatomy. We stress that Anatomy is not the only sanitization scheme that is vulnerable to this attack. In fact, any scheme that uses the random worlds model, i.i.d. model, or tuple-independent model needs to be re-evaluated. The difference between the attack presented here and …
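The heart of the attack is the mismatch between the random-worlds model, which treats every within-group linkage in an Anatomy release as equally likely, and what an attacker can learn by treating records as exchangeable and pooling evidence across groups. Below is a minimal sketch of that gap on hypothetical toy data; the EM routine is a simplified stand-in for the paper's exchangeability-based reasoning, not its actual machinery.

```python
# Minimal sketch (hypothetical toy data; a simplified EM stand-in for the
# paper's exchangeability argument) of why the random-worlds model
# underestimates what an attacker can infer from an Anatomy-style release.
from itertools import permutations

# Each group: (quasi-identifier features, bag of sensitive values).
# Within a group, Anatomy hides which feature goes with which value.
groups = [
    (["m", "m"], ["flu", "flu"]),
    (["f", "f"], ["cancer", "cancer"]),
    (["m", "f"], ["flu", "cancer"]),   # target group
]
VALUES = ["flu", "cancer"]

def random_worlds(group):
    """Baseline belief: all within-group linkages are equally likely, so
    the prediction for any member is just the group's value frequencies."""
    _, sens = group
    return {v: sens.count(v) / len(sens) for v in VALUES}

def em_attack(groups, iters=50):
    """Learn P(sensitive | feature) across ALL groups by EM, treating
    records as exchangeable draws from a common unknown distribution."""
    feats_all = {f for feats, _ in groups for f in feats}
    p = {f: {v: 1.0 / len(VALUES) for v in VALUES} for f in feats_all}
    for _ in range(iters):
        counts = {f: {v: 1e-9 for v in VALUES} for f in p}
        for feats, sens in groups:
            # E-step: weight each possible within-group linkage by the
            # current model.
            assigns = list(set(permutations(sens)))
            weights = []
            for a in assigns:
                w = 1.0
                for f, v in zip(feats, a):
                    w *= p[f][v]
                weights.append(w)
            z = sum(weights)
            # M-step accumulation: expected counts under the posterior.
            for a, w in zip(assigns, weights):
                for f, v in zip(feats, a):
                    counts[f][v] += w / z
        for f in counts:
            tot = sum(counts[f].values())
            p[f] = {v: c / tot for v, c in counts[f].items()}
    return p

print(random_worlds(groups[2]))   # {'flu': 0.5, 'cancer': 0.5}
print(em_attack(groups)["m"])     # flu probability close to 1
```

On the mixed group, the random-worlds baseline predicts 50/50 for the male record, while the pooled exchangeable model pushes the flu probability close to 1; that gap between the assumed and the achievable inference is what the paper exploits.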

Cited by 136 publications (131 citation statements). References 39 publications.
“…Some spectacular privacy breaches (such as demonstrations involving AOL [3] and GIC [39] data) have occurred when such intuition was not followed by a thorough analysis. In other cases, subtle implicit assumptions created weaknesses that could be exploited to breach privacy [25,40,22,26]. Similarly, the choice of a privacy mechanism based on some intuitively plausible measures of utility can result in a dataset that is not as useful as it could be [31,23].…”
Section: Introduction (mentioning; confidence: 99%)
“…However, many of them have been shown to be insufficient due to realistic attacks on such schemes (e.g., see [3]). The notion of differential privacy [4,5], however, has remained strong and resilient to these attacks.…”
Section: Introduction (mentioning; confidence: 99%)
“…By fixing x and enumerating y_i in DOM, the left-hand side of Equation 16 can be regarded as a distribution f_x(y_i) of Y_i. Given the theoretical values on the right-hand side of Equation 16, the error of f_x(y_i) can again be quantified by the L2 norm.…”
Section: Methods (mentioning; confidence: 99%)
“…There have been a plethora of studies on partition-based approaches, such as generalization [28], anatomy [33,35], condensation [1]. Despite the popularity of these approaches, they are vulnerable to various types of privacy attacks, as pointed out in [16,26,32].…”
Section: Related Work (mentioning; confidence: 99%)