2002
DOI: 10.1142/s0218488502001612

Modelling User Uncertainty for Disclosure Risk and Data Utility

Abstract: In this paper we show how a simple model that captures user uncertainty can be used to define suitable measures of disclosure risk and data utility. The model generalizes previous results of Duncan and Lambert [1]. We present several examples to illustrate how the new measures can be used to implement existing optimality criteria for the choice of the best form of data release.
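As a rough illustration of the kind of risk-utility formalization the abstract describes, the sketch below (in Python) traces a risk-utility curve for a simple noise-perturbation release. The risk and utility proxies, the synthetic data, and the release mechanism are all assumptions made for illustration, not the model defined in the paper.

    # A minimal sketch, assuming stand-in definitions of risk and utility;
    # this is NOT the paper's actual model, only the general R-U pattern.
    import numpy as np

    rng = np.random.default_rng(0)
    original = rng.normal(50.0, 10.0, size=200)  # hypothetical confidential values

    def disclosure_risk(released, original, tolerance=1.0):
        # Assumed risk proxy: fraction of records an intruder could infer
        # to within `tolerance` of the true value.
        return float(np.mean(np.abs(released - original) < tolerance))

    def data_utility(released, original):
        # Assumed utility proxy: shrinks as mean squared distortion grows.
        return 1.0 / (1.0 + float(np.mean((released - original) ** 2)))

    # Sweep the noise scale of the release mechanism to trace an R-U curve;
    # a data custodian would pick the release meeting its risk threshold.
    for sigma in [0.0, 0.5, 1.0, 2.0, 5.0]:
        released = original + rng.normal(0.0, sigma, size=original.shape)
        print(f"sigma={sigma:4.1f}  risk={disclosure_risk(released, original):.2f}  "
              f"utility={data_utility(released, original):.4f}")

Larger noise scales drive the risk proxy toward zero while eroding utility, which is the tradeoff the paper's measures are designed to quantify.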

Cited by 22 publications (18 citation statements)
References 9 publications (11 reference statements)

“…Such methods can be somewhat ad hoc, however, and a number of authors (e.g. Paass, 1988; Duncan and Lambert, 1989; Fuller, 1993; Trottini and Fienberg, 2002) have proposed statistical modelling frameworks which permit identification risk to be assessed following clear statistical principles. Identification may be treated as a form of statistical inference by a potential 'intruder', who is assumed to make efficient use of available information to facilitate identification through specified models.…”
Section: Introduction
confidence: 99%

“…Based on this framework, [36] performed an empirical analysis of SDC methods for standard tabular outputs (e.g., swapping, rounding). [42] also adopted the R-U confidentiality map as an optimality criterion for data release. Given its roots in the statistical domain, the R-U confidentiality map was initially applied to protection strategies based on perturbation (e.g., randomization), as opposed to the generalization techniques more commonly found in de-identification.…”
Section: Related Work
confidence: 99%

“…Techniques include but are not limited to: (1) sampling, (2) aggregation, including variable coarsening and the use of marginal releases from contingency tables, (3) data swapping, and (4) synthetic data, e.g., in the form of multiple imputation. One of the ways statistical researchers have chosen to look at these methods is via something akin to the risk-utility tradeoff, with different aggregate criteria to assess disclosure risk and different measures of data utility; see, e.g., Trottini and Fienberg (2002), Duncan and Stokes (2009), and Cox et al. (2011). Few of these approaches measure up to the strictness of the differential privacy approach, and when differential privacy is overlaid upon them, utility tends to be undermined.…”
Section: Technical Aspects of Protecting Privacy and Protecting Confidentiality
confidence: 99%
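To make one of the techniques listed in the last statement concrete, here is a minimal sketch of random data swapping in Python. The pairing rule, swap rate, and income variable are illustrative assumptions, not the procedure of any cited paper; the point is only that swapping preserves univariate aggregates while degrading record-level linkage.

    # A minimal sketch of random data swapping, one of the SDC techniques
    # listed above. The pairing rule and swap rate are illustrative
    # assumptions, not a procedure from any of the cited papers.
    import numpy as np

    def random_swap(values, swap_rate, rng):
        # Exchange values between randomly chosen disjoint pairs of records.
        out = values.copy()
        n_pairs = int(len(values) * swap_rate / 2)
        idx = rng.choice(len(values), size=2 * n_pairs, replace=False)
        pairs = idx.reshape(-1, 2)
        out[pairs[:, 0]], out[pairs[:, 1]] = values[pairs[:, 1]], values[pairs[:, 0]]
        return out

    rng = np.random.default_rng(1)
    income = rng.lognormal(10.0, 0.5, size=1000)  # hypothetical sensitive variable
    swapped = random_swap(income, swap_rate=0.2, rng=rng)
    # Swapping merely permutes values, so univariate aggregates survive
    # exactly, while the record-attribute link an intruder needs is broken.
    assert np.isclose(income.mean(), swapped.mean())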