Abstract: The potential for logical inference of high-level information from lower-level visible data presents an interesting and challenging threat to multilevel security. Such compromises of security are rather novel, since they circumvent traditional security mechanisms and rely on a user's knowledge of the application, which is external to the security layers of the system. The potential for such inferences, and the multiple consequences of a corrective action, substantially complicate the task of classifying t…
“…Therefore, different probabilities should be assigned to different individuals to convey this probabilistic nature of identity inference. Denning and Morgenstern [24][25][26] were the first to use information entropy to predict the risk of such probabilistic inferences in multilevel databases. Given two data items x and y, let H(y) denote the entropy of y and H_x(y) denote the conditional entropy of y given x.…”
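The two quantities named in the quote can be computed directly from observed data. Below is a minimal sketch, not taken from the cited papers: H(y) is the Shannon entropy of y, and H_x(y) (often written H(Y|X)) is the residual uncertainty in y once x is known.

```python
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy H(Y) of an observed sequence, in bits."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * log2(c / n) for c in counts.values())

def conditional_entropy(xs, ys):
    """Conditional entropy H_x(y) = H(Y|X): the uncertainty left in y
    once x is known, averaged over the observed values of x."""
    n = len(xs)
    pairs = Counter(zip(xs, ys))
    x_counts = Counter(xs)
    # H(Y|X) = -sum over (x, y) of p(x, y) * log2( p(x, y) / p(x) )
    return -sum(c / n * log2(c / x_counts[x]) for (x, y), c in pairs.items())

# Toy example: y is fully determined by x, so knowing x removes all
# uncertainty about y and the conditional entropy is zero.
xs = ["a", "a", "b", "b"]
ys = [1, 1, 2, 2]
assert entropy(ys) == 1.0               # two equally likely values: 1 bit
assert conditional_entropy(xs, ys) == 0.0
```

When H_x(y) is much smaller than H(y), revealing x leaks most of the information needed to infer y, which is exactly the inference risk the quoted work sets out to quantify.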
Section: The Challenge of Measuring Anonymity
In any situation where a set of personal attributes is revealed, there is a chance that the revealed data can be linked back to its owner. Examples of such situations are publishing user-profile micro-data or information about social ties, sharing profile information on social networking sites, or revealing personal information in computer-mediated communication. Measuring user anonymity is the first step to ensuring that the identity of the owner of revealed information cannot be inferred. Most current measures of anonymity ignore important factors such as the probabilistic nature of identity inference, the inferrer's outside knowledge, and the correlation between user attributes. Furthermore, in the social computing domain, variations in personal information and varying levels of information exchange among users make the problem more complicated. We present an information-entropy-based realistic estimation of the user anonymity level to deal with these issues in social computing, in an effort to help predict identity inference risks. We then address implementation issues of online protection by proposing complexity reduction methods that take advantage of basic information entropy properties. Our analysis and delay estimation based on experimental data show that our methods are viable, effective, and efficient in facilitating privacy in social computing and synchronous computer-mediated communications.
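A common way to turn entropy into an anonymity level is to normalize the attacker's uncertainty over candidate identities by its maximum possible value. The sketch below illustrates that generic idea; it is an assumption for illustration, not the estimator proposed in this abstract.

```python
from math import log2

def anonymity_degree(probs):
    """Entropy-based degree of anonymity: H(X) of the attacker's
    probability distribution over the N candidate identities, divided by
    log2(N), the maximum entropy (all candidates equally likely).
    Returns a value in [0, 1]. Generic sketch, not the paper's method."""
    n = len(probs)
    h = -sum(p * log2(p) for p in probs if p > 0)
    h_max = log2(n)
    return h / h_max if h_max > 0 else 0.0

# Uniform suspicion over 8 users: full anonymity.
assert anonymity_degree([1 / 8] * 8) == 1.0

# Revealed attributes concentrate suspicion on one user: anonymity drops.
skewed = anonymity_degree([0.9] + [0.1 / 7] * 7)
assert skewed < 0.5
```

A value near 1 means the revealed attributes have told the inferrer essentially nothing about which user they belong to; a value near 0 means the owner is nearly identified.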
“…Another key direction of research involves role-based access control. For a sampling of relevant literature on these topics, see [34,71,97,106,165,193,194,195,291,294,295,320,324,325,384,410,411].…”
This paper reviews applications in computer science that decision theorists have addressed for years, discusses the requirements posed by these applications that place great strain on decision theory/social science methods, and explores applications in the social and decision sciences of newer decision-theoretic methods developed with computer science applications in mind. The paper deals with the relation between computer science and decision-theoretic methods of consensus, with the relation between computer science and game theory and decisions, and with "algorithmic decision theory."
“…Detecting and preventing the disclosure of sensitive data via inference channels is referred to as the inference problem [9]. Solutions to the inference problem can be categorized as either a database design [2,3,7,8,11,14,15,17,18,21] or a query processing [4,10,12,13,16,19] solution.…”
Abstract. The Dynamic Disclosure Monitor (D²Mon) is a security mechanism that executes during query processing time to prevent sensitive data from being inferred. A limitation of D²Mon is that it unnecessarily examines the entire history database in computing inferences. In this paper, we present a process that can be used to reduce the number of tuples that must be examined in computing inferences during query processing time. In particular, we show how a priori knowledge of a database dependency can be used to reduce the search space of a relation when applying database dependencies. Using the database dependencies, we develop a process that forms an index table into the database that identifies those tuples that can be used in satisfying database dependencies. We show how this process can be used to extend D²Mon to reduce the number of tuples that must be examined in the history database when computing inferences. We further show that inferences that are computed by D²Mon using our extension are sound and complete.
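The index-table idea can be sketched as follows: if a database dependency's antecedent only mentions certain attributes, the history database can be indexed on those attributes once, so that inference computation examines only the tuples that could possibly satisfy the dependency rather than scanning the whole history. The schema, attribute names, and dependency below are purely illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

# Hypothetical history database. A dependency whose antecedent mentions
# only 'dept' (e.g. dept -> building) never needs tuples from other depts.
history = [
    {"name": "alice", "dept": "sales", "salary": 50},
    {"name": "bob",   "dept": "eng",   "salary": 70},
    {"name": "carol", "dept": "sales", "salary": 55},
]

def build_index(tuples, attrs):
    """Map each combination of values of `attrs` to the tuples holding it,
    so dependency application can skip non-matching tuples entirely."""
    index = defaultdict(list)
    for t in tuples:
        index[tuple(t[a] for a in attrs)].append(t)
    return index

# Build the index once, on the attributes the dependency mentions...
idx = build_index(history, ["dept"])

# ...then a query answer touching dept='sales' examines 2 tuples, not 3.
candidates = idx[("sales",)]
assert len(candidates) == 2
```

The saving grows with the size of the history database: the scan cost drops from all stored tuples to only those sharing the dependency's antecedent values with the current query answer.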