2020
DOI: 10.1016/j.cose.2020.101753
Optimization-based k-anonymity algorithms

Cited by 24 publications (20 citation statements)
References 30 publications
“…The anonymized database can be studied in place of the original database. Common data anonymization models for preventing privacy disclosure include k-anonymity [10][11][12][13][14], l-diversity [9], t-closeness [15] and δ-presence [16]. a) k-anonymity: k-anonymity was developed to address identity disclosure.…”
Section: Data Anonymization
Citation type: mentioning, confidence: 99%
“…In k-anonymity, no individual can be reidentified from the published data with probability higher than 1/k. Other variations of k-anonymity include clustering anonymity [11], distribution-preserving k-anonymity [12], optimization-based k-anonymity [13], p-sensitive k-anonymity [14], (X,Y)-anonymity [17], (α, k)-anonymity [18], LKC-privacy [19] and random k-anonymous [20], which prevent identity disclosure by hiding the record of a target in an equivalence class of records with the same QID values. Although the k-anonymity model protects against identity disclosure, it is vulnerable to attribute disclosure.…”
Section: Data Anonymization
Citation type: mentioning, confidence: 99%
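The k-anonymity property quoted above is mechanical to verify: group records by their quasi-identifier (QID) values and check that every equivalence class has at least k members. A minimal sketch in Python (the toy table, column indices, and value of k are illustrative assumptions, not taken from the cited works):

```python
from collections import Counter

def is_k_anonymous(records, qid_indices, k):
    """True if every equivalence class (records sharing the same
    quasi-identifier values) contains at least k records."""
    classes = Counter(tuple(r[i] for i in qid_indices) for r in records)
    return min(classes.values()) >= k

# Toy table: (age range, ZIP prefix, diagnosis); columns 0-1 are the QIDs.
table = [
    ("20-29", "130**", "flu"),
    ("20-29", "130**", "cold"),
    ("30-39", "148**", "flu"),
    ("30-39", "148**", "asthma"),
]
print(is_k_anonymous(table, qid_indices=[0, 1], k=2))  # True: both classes have 2 rows
```

With k = 3 the same table fails, since each equivalence class holds only two records — an adversary knowing a target's QID values could narrow them down to a class smaller than 3.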
“…2. This error, defined for a pair (o, d), corresponds to the individual information loss [2], which penalizes the generalization of each attribute independently. For OD trips there are only two attributes, namely the origin and the destination, and their spatial generalizations can be measured by |o| and |d|, the number of tiles in each generalized zone.…”
Section: Problem Setting
Citation type: mentioning, confidence: 99%
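Under this reading, the per-trip loss only needs the sizes of the two generalized zones. A hedged sketch (representing each zone as a set of tile ids, and reporting the pair (|o|, |d|) rather than some aggregate, are both assumptions):

```python
def individual_loss(o_tiles, d_tiles):
    """Individual information loss of one (o, d) trip, taken here as
    the pair (|o|, |d|): the sizes, in tiles, of the generalized
    origin and destination zones."""
    return (len(o_tiles), len(d_tiles))

# Hypothetical trip: origin generalized to 2 tiles, destination to 3.
print(individual_loss({"t1", "t2"}, {"t5", "t6", "t7"}))  # (2, 3)
```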
“…Data protection approaches rely mostly on k-anonymization, which is achieved when every user in the data is indistinguishable from at least k − 1 other users. k-anonymization is usually attained through generalization and suppression [1,2], i.e., replacing values with less specific but semantically consistent values shared with other users, and deleting outlier users, respectively. However, k-anonymization of whole trajectories proves difficult to achieve for k > 5 and struggles to offer truly foolproof anonymization [3].…”
Section: Introduction
Citation type: mentioning, confidence: 99%
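The two mechanisms named above can be illustrated together: generalize a quasi-identifier until equivalence classes reach size k, then suppress whatever still stands out. A naive uniform-generalization sketch — not the optimization-based algorithm of the indexed paper; the ZIP-digit hierarchy and the data are invented for illustration:

```python
from collections import Counter

def generalize_zip(z, level):
    """Generalization: blank out the last `level` digits of a ZIP code."""
    return z[: len(z) - level] + "*" * level

def anonymize(records, k, max_level=4):
    """Uniformly generalize the ZIP quasi-identifier until every
    equivalence class holds at least k records; records still below
    the threshold at the coarsest level are suppressed (dropped)."""
    for level in range(max_level + 1):
        gen = [(generalize_zip(z, level), attr) for z, attr in records]
        counts = Counter(z for z, _ in gen)
        if min(counts.values()) >= k:
            return gen                                # generalization sufficed
    return [r for r in gen if counts[r[0]] >= k]      # suppression step

data = [("13053", "flu"), ("13068", "cold"), ("13053", "flu"), ("94850", "asthma")]
released = anonymize(data, k=2)  # three rows generalized to "1****"; the outlier is suppressed
```

Even at the coarsest level the "94850" record remains unique, so it is deleted rather than published — exactly the generalization-then-suppression trade-off the quoted passage describes, and the reason trajectory data (with many more quasi-identifying points per user) resists this treatment for larger k.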