Abstract. Cloud computing, with its inherent advantages, is drawing attention for business-critical applications, but it concurrently demands a high level of trust in cloud service providers. Reputation-based trust is emerging as a good way to model the trust of cloud service providers based on available evidence. Many existing reputation-based systems either ignore or give little importance to the uncertainty linked with that evidence. In this paper, we propose an uncertainty model and define our approach for computing opinions about cloud service providers. Using subjective logic operators together with the computed opinion values, we propose mechanisms to calculate the reputation of cloud service providers. We evaluate and compare our proposed model with existing reputation models.

Keywords: Cloud, Trust, Reputation, SLA, Subjective logic.

Introduction

Cloud computing has been recognised as an important new paradigm to support small and medium-sized businesses and general IT applications. The advantages of Cloud computing are manifold, including better use and sharing of IT resources, unlimited scalability and flexibility, a high level of automation, reduced computer and software costs, and access to several services. However, despite these advantages and the rapid growth of Cloud computing, it raises several security, privacy, and trust issues that need immediate attention. Trust is an important concept for cloud computing, given the need for cloud consumers to select cost-effective, trustworthy, and less risky services [2]. The issue of trust is also important for service providers, who must decide on an infrastructure provider that can comply with their needs and verify that infrastructure providers maintain their agreements during service deployment. The work presented in this paper is being developed under the FP7 EU-funded project OPTIMIS [5][13], which supports organisations in externalising services and applications to trustworthy cloud providers.
More specifically, the project focuses on service and infrastructure providers. One of the main goals of OPTIMIS is to develop a toolkit that assists cloud service providers in supplying optimised services based on four aspects: trust, risk, eco-efficiency, and cost. As part of the overall goal of OPTIMIS, this paper describes a trust model that supports service providers (SPs) in verifying the trustworthiness of infrastructure providers (IPs) during the deployment and operational phases of the services they supply.
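The abstract does not spell out the exact opinion computation, but the standard subjective-logic mapping from positive and negative evidence to an opinion triple (belief, disbelief, uncertainty) can be sketched as follows. The function names, the non-informative prior weight W = 2, and the default base rate are illustrative assumptions, not the paper's notation:

```python
import dataclasses

@dataclasses.dataclass
class Opinion:
    """A subjective-logic opinion about one provider."""
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5  # prior probability in the absence of evidence

def opinion_from_evidence(r: float, s: float, W: float = 2.0,
                          base_rate: float = 0.5) -> Opinion:
    """Map r positive and s negative evidence items (e.g. SLA
    compliance records) to an opinion; W is the prior weight, so
    uncertainty shrinks as evidence accumulates."""
    total = r + s + W
    return Opinion(r / total, s / total, W / total, base_rate)

def expectation(op: Opinion) -> float:
    """Probability expectation E = b + a*u, usable to rank providers."""
    return op.belief + op.base_rate * op.uncertainty
```

For example, a provider with 8 positive and 2 negative interactions gets belief 8/12 and uncertainty 2/12; by construction belief + disbelief + uncertainty always sums to 1, which is what makes the triple an opinion rather than a bare probability.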
Abstract. The cloud-based delivery model for IT resources is revolutionizing the IT industry. Despite the marketing hype around "the cloud", the paradigm itself is in a critical transition state from the laboratories to the mass market. Many technical and business aspects of cloud computing need to mature before it is widely adopted for corporate use. For example, the inability to seamlessly burst between internal and external cloud platforms, termed cloud bursting, is a significant shortcoming of current cloud solutions. Furthermore, the absence of a capability that would allow brokering between multiple cloud providers, or aggregating them into a composite service, inhibits the free and open competition that would help the market mature. This paper describes the concepts of cloud bursting and cloud brokerage and discusses the open management and security issues associated with the two models. It also presents a possible architectural framework capable of powering brokerage-based cloud services, currently being developed in the scope of OPTIMIS, an EU FP7 project.
Abstract. We propose a cloud contextualization mechanism which operates in two stages: contextualization of VM images prior to service deployment (PaaS level) and self-contextualization of VM instances created from the image (IaaS level). The contextualization tools are implemented as part of the OPTIMIS Toolkit, a set of software components for simplified management of cloud services and infrastructures. We present the architecture of our contextualization tools and demonstrate the feasibility of our contextualization mechanism in a three-tier web application scenario. Preliminary performance results suggest acceptable performance and scalability of our prototype.
Abstract. The concept of an Ephemerizer system has been introduced in earlier works as a mechanism to ensure that a file deleted from persistent storage remains unrecoverable. The principle involves storing the data in encrypted form on the user's machine and the key to decrypt the data on a physically separate machine. However, the schemes proposed so far neither support fine-grained user settings on the lifetime of the data nor provide any mechanism to check the integrity of the system that uses the secret data. In addition, we report a vulnerability in one version of the proposed scheme that an attacker can exploit to nullify the ephemeral nature of the keys. We propose and discuss in detail an alternative scheme, powered by an Identity-Based cryptosystem, that overcomes the identified limitations of the original system.
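As a toy illustration of the Ephemerizer principle described above (not the paper's Identity-Based scheme, and deliberately not a real cipher), the following sketch keeps the ciphertext and the key on logically separate stores and shows that deleting the key renders the data unrecoverable; all names are hypothetical:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy hash-counter keystream; illustration only, not a secure cipher."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# Ephemerizer principle: ciphertext lives on the user's machine,
# the key on a separate key server (here, a dict stands in for it).
key_server = {"doc-1": secrets.token_bytes(32)}
ciphertext = encrypt(key_server["doc-1"], b"sensitive data")

# When the data's lifetime expires, the key server deletes the key;
# without it, the ciphertext is permanently unrecoverable.
del key_server["doc-1"]
```

The point of the separation is that secure deletion reduces to destroying one small key rather than scrubbing every copy of the (possibly replicated) ciphertext.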
Abstract. Low-cost sensors offer an attractive solution to the challenge of establishing affordable and dense spatio-temporal air quality monitoring networks with greater mobility and lower maintenance costs. These low-cost sensors offer reasonably consistent measurements but require in-field calibration to improve agreement with regulatory instruments. In this paper, we report the results of a deployment and calibration study on a network of six air quality monitoring devices built using the Alphasense O3 (OX-B431) and NO2 (NO2-B43F) electrochemical gas sensors. The sensors were deployed in two phases over a period of 3 months at sites situated within two megacities with diverse geographical, meteorological and air quality parameters. A unique feature of our deployment is a swap-out experiment wherein three of these sensors were relocated to different sites in the two phases. This gives us a unique opportunity to study the effect of seasonal, as well as geographical, variations on calibration performance. We report an extensive study of more than a dozen parametric and non-parametric calibration algorithms. We propose a novel local non-parametric calibration algorithm based on metric learning that offers, across deployment sites and phases, an R2 coefficient of up to 0.923 with respect to reference values for O3 calibration and up to 0.819 for NO2 calibration. This represents a 4–20 percentage point increase in terms of R2 values offered by classical non-parametric methods. We also offer a critical analysis of the effect of various data preparation and model design choices on calibration performance. 
The key recommendations emerging from this study include (1) incorporating ambient relative humidity and temperature into calibration models; (2) assessing the relative importance of various features with respect to the calibration task at hand, using an appropriate feature-weighting or metric-learning technique; (3) using local calibration techniques such as k nearest neighbors (KNN); (4) performing temporal smoothing over raw time series data, but being careful not to do so too aggressively; and (5) making all efforts to ensure that data with enough diversity are presented to the calibration algorithm during training, to ensure good generalization. These results offer insights into the strengths and limitations of these sensors and an encouraging opportunity to use them to supplement and densify regulatory compliance monitoring networks.
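Recommendations (2)-(4) above can be sketched minimally as follows, with a fixed per-feature weight vector standing in for a fully learned metric; the function names and the toy feature layout [raw_gas, temperature, humidity] are illustrative assumptions, not the paper's implementation:

```python
import math
from statistics import mean

def smooth(series, window=3):
    """Centered moving average over a raw sensor time series
    (recommendation 4: mild temporal smoothing)."""
    half = window // 2
    return [mean(series[max(0, i - half):i + half + 1])
            for i in range(len(series))]

def knn_calibrate(train_X, train_y, query, k=5, weights=None):
    """Local (KNN) calibration: predict the reference value for one
    query feature vector, e.g. [raw_gas, temperature, humidity].
    `weights` is a per-feature weighting standing in for a learned
    metric (recommendation 2)."""
    if weights is None:
        weights = [1.0] * len(query)
    def dist(x):
        return math.sqrt(sum(w * (a - b) ** 2
                             for w, a, b in zip(weights, x, query)))
    # Average the reference values of the k nearest training points.
    nearest = sorted(zip(train_X, train_y), key=lambda p: dist(p[0]))[:k]
    return mean(y for _, y in nearest)
```

Because prediction averages only nearby training points, the model adapts to local sensor response regimes, which is the intuition behind recommendation (3); the trade-off is that it generalizes poorly to conditions absent from training data, hence recommendation (5).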
Traditionally, the process of online digital content distribution has involved a limited number of centralised distributors selling consumers protected content and licenses authorising its use. In this paper, we extend this model by introducing a security scheme that enables DRM-preserving digital content redistribution. Essentially, consumers can buy not only the rights to use digital content but also the rights to redistribute it to other consumers in a DRM-controlled fashion. We examine the threats associated with such a redistribution model and explain how our scheme addresses them.