Hierarchical anonymization algorithms against background knowledge attack in data releasing
2016
DOI: 10.1016/j.knosys.2016.03.004

Cited by 39 publications (39 citation statements): 0 supporting, 36 mentioning, 0 contrasting
References 17 publications
“…We present a detailed overview of the five datasets used in the experiments in Table 1. The first three datasets are publicly available [74,75], and the last two were created synthetically, respecting the attribute value distributions and percentages found in real SN [76]. We assigned two SA (political views and online disease community affiliation) with distinct values to the synthetically created datasets.…”
Section: Simulation Results and Discussion (mentioning, confidence: 99%)
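As a hedged illustration of the synthetic-data step this excerpt describes, the sketch below assigns a sensitive attribute (SA) to records while respecting a target value distribution. The attribute name, proportions, and function name are hypothetical, not taken from the cited study.

```python
import random

def assign_sensitive_attribute(n_records, value_distribution, seed=42):
    """Draw SA values for n_records according to value_distribution,
    a dict mapping SA value -> probability (weights should sum to 1)."""
    rng = random.Random(seed)
    values = list(value_distribution.keys())
    weights = list(value_distribution.values())
    return rng.choices(values, weights=weights, k=n_records)

# Hypothetical example: a 'political views' SA with an assumed distribution.
political_views = assign_sensitive_attribute(
    10_000, {"left": 0.35, "center": 0.40, "right": 0.25})
```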
“…For preserving privacy, the algorithms in privacy models take different approaches. These approaches can be categorized into (i) generalization [1][2][3][4][5][13][14][15] (i.e., greedily convert more specialized values to less specialized values), (ii) anatomy [25,26] (i.e., partition the QI and S attributes), and (iii) microaggregation [29,30] (i.e., the dataset is partitioned into clusters and the QI values of records are replaced with the cluster mean). The work proposed in this paper considers syntactic data privacy, using generalization and anatomy for MSAs.…”
Section: Data Privacy Models And (mentioning, confidence: 99%)
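To make approach (i) concrete, here is a minimal sketch of hierarchy-based generalization, where each step replaces a value with a less specialized ancestor. The ZIP-code taxonomy and function are illustrative assumptions, not the hierarchy used in the paper.

```python
# Illustrative generalization hierarchy: each level masks one more digit.
ZIP_HIERARCHY = {
    "13053": "1305*",   # level 1
    "1305*": "130**",   # level 2
    "130**": "*",       # level 3: fully suppressed
}

def generalize(value, levels=1, hierarchy=ZIP_HIERARCHY):
    """Replace a specialized value with a less specialized ancestor."""
    for _ in range(levels):
        value = hierarchy.get(value, value)
    return value

assert generalize("13053", levels=2) == "130**"
```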
“…Different general-purpose posteriori measures for utility and privacy loss [9,15,18,22] are available for generalization-based algorithms. In these approaches, the publisher does not know about the recipient's analysis method.…”
Section: Experimental Analysis (mentioning, confidence: 99%)
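One well-known example of such a posteriori utility measure is the discernibility metric, which penalizes each record by the size of the equivalence class it falls into after generalization. The sketch below is an assumed illustration of that idea, not an implementation from references [9,15,18,22].

```python
from collections import Counter

def discernibility(quasi_identifiers):
    """quasi_identifiers: one generalized QI tuple per released record.
    Returns the sum over equivalence classes of (class size)^2."""
    class_sizes = Counter(quasi_identifiers)
    return sum(size * size for size in class_sizes.values())

released = [("130**", "3*"), ("130**", "3*"), ("141**", "4*")]
print(discernibility(released))  # 2^2 + 1^2 = 5
```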
“…Specifically, most of the existing techniques assess performance using traditional metrics only: information loss is measured in terms of the global certainty penalty [4,24], the non-uniform entropy metric [25], normalized information loss [26,27], the normalized certainty penalty [4], query error [11,28], or the sum of squared errors [29]. In the proposed approach, by contrast, Nayahi and Kavitha achieve anonymization through centroid-based replacement of QID values, which is superior to suppression in terms of information loss and computationally less expensive than generalization.…”
Section: For Utility Preserving Data Clustering (mentioning, confidence: 99%)
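For instance, the sum of squared errors named above can quantify the information loss of centroid-based QID replacement. The following sketch uses invented data and a single illustrative cluster; it is not code from the cited work.

```python
def sse(original, anonymized):
    """Sum of squared per-attribute differences across all records."""
    return sum((o - a) ** 2
               for rec_o, rec_a in zip(original, anonymized)
               for o, a in zip(rec_o, rec_a))

# Hypothetical numeric QIDs (age, salary) for one cluster of records.
original = [(25, 50_000), (27, 52_000), (26, 51_000)]
centroid = tuple(sum(col) / len(original) for col in zip(*original))
anonymized = [centroid] * len(original)  # replace QIDs with the cluster mean
print(sse(original, anonymized))
```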