Present-day database applications, with large numbers of users, require fine-grained access control mechanisms at the level of individual tuples, not just entire relations/views, to control which parts of the data each user can access. Fine-grained access control is often enforced in the application code, which has numerous drawbacks; these can be avoided by specifying and enforcing access control at the database level. We present a novel fine-grained access control model based on authorization views that allows "authorization-transparent" querying; that is, user queries can be phrased in terms of the database relations and are valid if they can be answered using only the information contained in these authorization views. We extend earlier work on authorization-transparent querying by introducing a new notion of validity, conditional validity. We give a powerful set of inference rules to check for query validity. We demonstrate the practicality of our techniques by describing how an existing query optimizer can be extended to perform access-control checks by incorporating these inference rules.
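The core idea can be illustrated with a minimal sketch (not the paper's algorithm): an authorization view restricts a relation to the tuples a user may see, and a query phrased over the base relation is treated as valid when evaluating it over the view alone yields the same answer. The relation, fields, and validity check below are illustrative assumptions.

```python
# Illustrative sketch of tuple-level access control via an authorization view.
# The relation, schema, and validity check are assumptions for exposition,
# not the paper's inference-rule mechanism.

employees = [
    {"id": 1, "dept": "sales", "salary": 50_000},
    {"id": 2, "dept": "eng",   "salary": 90_000},
    {"id": 3, "dept": "sales", "salary": 55_000},
]

def auth_view(user_dept):
    """Authorization view: a user may see only tuples of her own department."""
    return [row for row in employees if row["dept"] == user_dept]

def run_query(relation, predicate):
    return [row for row in relation if predicate(row)]

def is_valid(predicate, user_dept):
    """A query over the base relation is valid if it can be answered using
    only the information in the authorization view."""
    return run_query(employees, predicate) == run_query(auth_view(user_dept), predicate)

q = lambda row: row["dept"] == "sales" and row["salary"] > 52_000
print(is_valid(q, "sales"))  # True: the query touches only authorized tuples
print(is_valid(q, "eng"))    # False: answering it would require sales tuples
```

The paper's contribution is deciding such validity (and the weaker conditional validity) via inference rules inside the optimizer, rather than by the brute-force double evaluation sketched here.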
Publish/subscribe systems have demonstrated the ability to scale to large numbers of users and high data rates when providing content-based data dissemination services on the Internet. However, their services are limited by the data semantics and query expressiveness that they support. On the other hand, recent work on selective dissemination of XML data has made significant progress in moving from XML filtering to the richer functionality of transformation for result customization, but has in general ignored the challenges of deploying such XML-based services at Internet scale. In this paper, we address these challenges in the context of incorporating the rich functionality of XML data dissemination in a highly scalable system. We present the architectural design of ONYX, a system based on an overlay network. We identify the salient technical challenges in supporting XML filtering and transformation in this environment and propose techniques for solving them.

Introduction

A large number of emerging applications, such as mobile services, stock tickers, sports tickers, personalized newspaper generation, network monitoring, traffic monitoring, and electronic auctions, have fuelled an increasing interest in Content-Based Data Dissemination (CBDD). CBDD is a service that delivers information to users (equivalently, applications or organizations) based on the correspondence between the content of the information and the users' data interests. Figure 1 shows the context in which a data dissemination system providing this service operates. Users subscribe to the service by providing profiles expressing their data interests. Data sources publish their data by pushing messages to the system. The system delivers to each user the messages that match her data interests; these messages are presented in the format required by the user.

Over the past few years, XML has rapidly gained popularity as the standard for data exchange in enterprise intranets and on the Internet.
The ability to augment data with semantic and structural information using XML-based encoding raises the potential for more accurate and useful delivery of data. In the context of XML-based data dissemination, user profiles can involve constraints over both the structure and values of XML fragments, resulting in potentially more precise filtering of XML messages. In many emerging applications, the relevant XML messages also need to be transformed for data and application integration, personalization, and adaptation to wireless devices. Integrating XML processing into such distributed environments appears to be a natural approach to supporting large-scale XML dissemination.

Challenges

Distributed pub/sub systems partition the profile population across multiple nodes and direct the message flow to the nodes hosting profiles based on the content of messages (referred to as content-driven routing). Integrating XML into content-driven routing, however, brings the following key challenges. As XML mixes structural and value-based information, content-driven...
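The matching step behind content-driven routing can be sketched minimally: a message is forwarded only to the nodes hosting at least one profile it satisfies. The profiles, message, and two-node partitioning below are illustrative assumptions, not ONYX's actual design.

```python
# Illustrative sketch of content-based matching of an XML message against
# user profiles partitioned across broker nodes. Profiles, message, and
# partitioning are assumptions for exposition.
import xml.etree.ElementTree as ET

# Each node hosts a subset of the profile population; each profile here is a
# (limited) XPath expression over incoming messages.
profiles_by_node = {
    "node-A": {"alice": ".//stock[symbol='ACME']"},
    "node-B": {"bob": ".//sports/score"},
}

message = ET.fromstring(
    "<msg><stock><symbol>ACME</symbol><price>42.0</price></stock></msg>"
)

def route(msg):
    """Content-driven routing: forward the message only to nodes hosting at
    least one matching profile; report the matched users per node."""
    matches = {}
    for node, profiles in profiles_by_node.items():
        hit = [user for user, xpath in profiles.items() if msg.findall(xpath)]
        if hit:
            matches[node] = hit
    return matches

print(route(message))  # {'node-A': ['alice']}
```

The challenge the paper identifies is doing this at scale: because XML profiles mix structural and value constraints, brokers must index and evaluate many such expressions per message rather than scanning them one by one as above.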
The emergence of large-scale receptor-based systems has enabled applications to execute complex business logic over data generated from monitoring the physical world. An important functionality required by these applications is the detection of, and response to, complex events, often in real time. Bridging the gap between low-level receptor technology and such high-level application needs remains a significant challenge. We demonstrate our solution to this problem in the context of HiFi, a system we are building to solve the data management problems of large-scale receptor-based systems. Specifically, we show how HiFi generates simple events out of receptor data at its edges and provides high-functionality complex event processing mechanisms for sophisticated event detection, using a real-world library scenario.
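The pattern of composing simple edge events into a complex event can be sketched as follows, loosely inspired by the library scenario; the event names, time window, and detection rule are illustrative assumptions, not HiFi's mechanisms.

```python
# Illustrative sketch of complex-event detection over simple events generated
# from receptor (e.g., RFID) readings. Event kinds, window, and rule are
# assumptions for exposition.
from collections import deque

WINDOW = 30  # seconds: how far back to correlate simple events

def detect_theft(events):
    """Complex event: a tagged book crosses the exit gate without a preceding
    'checkout' event for the same tag within the time window."""
    recent_checkouts = deque()  # (timestamp, tag_id), ordered by time
    alerts = []
    for ts, kind, tag in sorted(events):
        # Expire checkouts that fell out of the correlation window.
        while recent_checkouts and recent_checkouts[0][0] < ts - WINDOW:
            recent_checkouts.popleft()
        if kind == "checkout":
            recent_checkouts.append((ts, tag))
        elif kind == "exit" and tag not in {t for _, t in recent_checkouts}:
            alerts.append((ts, tag))
    return alerts

readings = [
    (100, "checkout", "book-17"),
    (110, "exit",     "book-17"),  # fine: checked out 10 s earlier
    (200, "exit",     "book-42"),  # alert: no checkout within the window
]
print(detect_theft(readings))  # [(200, 'book-42')]
```

In a system like HiFi, the simple-event generation (turning raw readings into "checkout" and "exit" events) happens at the network edges, while composition rules like the one above run in the complex event processing layer.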