At present, most studies on data publishing consider only a single sensitive attribute, and work on multiple sensitive attributes remains scarce. Moreover, almost all existing studies on multiple sensitive attributes have not taken the inherent relationships between sensitive attributes into account, so an adversary can use background knowledge about these relationships to attack the privacy of users. This paper presents an attack model based on the association rules between sensitive attributes and, accordingly, a data publishing model for multiple sensitive attributes. Proof and analysis show that the new model can prevent an adversary from using background knowledge about association rules to breach privacy, while still producing high-quality released information. Finally, this paper verifies these conclusions with experiments.
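As an illustration of the kind of background knowledge this attack model exploits, the sketch below (hypothetical records and attribute names, not taken from the paper) computes the confidence of an association rule between two sensitive attributes inside an equivalence group; a confidence near 1.0 means that knowing one attribute value effectively reveals the other.

```python
def rule_confidence(records, antecedent, consequent):
    """Confidence of the association rule antecedent => consequent
    over a list of (sa1, sa2) sensitive-attribute pairs."""
    matching = [r for r in records if r[0] == antecedent]
    if not matching:
        return 0.0
    return sum(1 for r in matching if r[1] == consequent) / len(matching)

# Hypothetical equivalence group of (disease, medication) pairs.
group = [("flu", "oseltamivir"), ("flu", "oseltamivir"),
         ("flu", "oseltamivir"), ("cold", "rest")]

# If an adversary knows a victim in this group has "flu", a
# confidence of 1.0 reveals the victim's medication with certainty.
conf = rule_confidence(group, "flu", "oseltamivir")
```

A safe release would need to break such high-confidence links between the two sensitive attributes within each group.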
Most data publishing methods have not considered sensitivity protection, and hence an adversary can disclose privacy through a sensitivity attack. To address this problem, this paper presents a medical data publishing method based on sensitivity determination. To protect sensitivity, the sensitivity of disease information is determined by its semantics. To seek a trade-off between information utility and privacy security, the new method focuses on protecting sensitive values with high sensitivity and assigns highly sensitive disease information to groups as evenly as possible. Experiments are conducted on two real-world datasets whose records include various patient attributes. To measure sensitivity protection, the authors define a metric that evaluates the degree of sensitivity disclosure. In addition, information loss and discernability metrics are used to measure the availability of the released tables. The experimental results indicate that the new method provides better privacy than the traditional one while information utility is guaranteed. Beyond value protection, the proposed method provides sensitivity protection and usable releases of medical data.
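The even assignment of highly sensitive values described above can be sketched roughly as follows. This is a simplified round-robin heuristic with hypothetical sensitivity scores; the paper's actual grouping algorithm may differ.

```python
def distribute(records, sensitivity, threshold, num_groups):
    """Spread highly sensitive disease values (score above threshold)
    across groups as evenly as possible, then fill with the rest."""
    groups = [[] for _ in range(num_groups)]
    high = [r for r in records if sensitivity[r] > threshold]
    low = [r for r in records if sensitivity[r] <= threshold]
    # Round-robin placement keeps the per-group count of highly
    # sensitive records within one of each other.
    for i, record in enumerate(high + low):
        groups[i % num_groups].append(record)
    return groups

# Illustrative disease values and sensitivity scores.
diseases = ["HIV", "cancer", "flu", "cold", "HIV", "flu"]
scores = {"HIV": 0.9, "cancer": 0.8, "flu": 0.2, "cold": 0.1}
groups = distribute(diseases, scores, 0.5, 2)
```

Balancing the highly sensitive values first limits how confidently an adversary can link any one group to a high-sensitivity disease.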
Spontaneous reporting systems (SRSs) collect adverse drug events (ADEs) for evaluation and analysis. Periodical SRS data publication gives rise to a problem where sensitive, private data can be discovered through various attacks. The existing SRS data publishing methods are vulnerable to the Medicine Discontinuation Attack (MD-attack) and the Substantial Symptoms Attack (SS-attack). To remedy this problem, an improved periodical SRS data publishing method, PPMS(k, θ, α)-bounding, is proposed. The new method counters the MD-attack by ensuring that each equivalence group contains at least k new medicine-discontinuation records, while the SS-attack is thwarted using a heuristic algorithm. Theoretical analysis indicates that PPMS(k, θ, α)-bounding can thwart both attacks. The experimental results also demonstrate that PPMS(k, θ, α)-bounding provides much better privacy protection than the existing method and does not increase information loss. PPMS(k, θ, α)-bounding thus improves privacy while guaranteeing the usability of the released tables.
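A minimal check of the k-discontinuation condition might look like the sketch below. The record format is hypothetical, and the actual PPMS(k, θ, α)-bounding method also enforces the θ and α constraints, which are omitted here.

```python
def satisfies_md_k(groups, k):
    """MD-attack condition: every equivalence group in the release
    must contain at least k new medicine-discontinuation records."""
    return all(sum(1 for r in g if r["discontinued"]) >= k
               for g in groups)

# A toy release with two equivalence groups.
release = [
    [{"drug": "A", "discontinued": True},
     {"drug": "B", "discontinued": True},
     {"drug": "C", "discontinued": False}],
    [{"drug": "D", "discontinued": True},
     {"drug": "E", "discontinued": True}],
]
```

With at least k discontinuation records per group, an adversary who knows a patient stopped a medicine cannot single out that patient's record.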
Overgeneralisation may occur because most studies on data publishing for multiple sensitive attributes (SAs) have not considered personalised privacy requirements; furthermore, these personalised requirements may themselves cause sensitive information disclosure. To address this issue, this article develops a personalised data publishing method for multiple SAs. According to the requirements of individuals, the new method partitions SA values into two categories, private values and public values, and breaks the association between them to guarantee privacy. The private values are anonymised, while the public values are released without anonymisation. An algorithm is designed to achieve this privacy model, in which selectivity is determined by sensitive-value frequency and undesirable objects. The experimental results show that the proposed method provides more information utility than previous methods. Theoretical analysis and experiments also indicate that privacy is guaranteed even if the public values are known to an adversary. The new method thus avoids both the overgeneralisation and the privacy breaches caused by personalised requirements.
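The private/public partition of SA values might be sketched as follows. The frequency threshold and the "undesirable" set are illustrative stand-ins for the selectivity criteria described in the abstract.

```python
from collections import Counter

def partition_values(values, undesirable, freq_threshold):
    """Split sensitive-attribute values into a private category
    (rare or undesirable values, to be anonymised) and a public
    category (common, benign values, released as-is)."""
    freq = Counter(values)
    n = len(values)
    private, public = set(), set()
    for value, count in freq.items():
        if value in undesirable or count / n < freq_threshold:
            private.add(value)
        else:
            public.add(value)
    return private, public

# Hypothetical SA values: "HIV" is marked undesirable by the user.
values = ["flu"] * 5 + ["HIV"] + ["cold"] * 4
private, public = partition_values(values, {"HIV"}, 0.2)
```

Only the private category then goes through anonymisation, which is what lets the method release the public values intact and avoid overgeneralisation.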