Abstract-Cloud computing offers the potential to dramatically reduce the cost of software services through the commoditization of information technology assets and on-demand usage patterns. However, determining resource-provisioning policies for applications in such environments is complex, introduces significant inefficiencies, and has driven the emergence of a new class of infrastructure called Platform-as-a-Service (PaaS). In this paper, we present a novel PaaS architecture being developed in the EU IST IRMOS project, targeting real-time Quality of Service (QoS) guarantees for online interactive multimedia applications. The architecture considers the full service lifecycle, including service engineering, service level agreement design, provisioning and monitoring. QoS parameters at both application and infrastructure levels are given specific attention as the basis for provisioning policies in the context of temporal constraints. The generic applicability of the architecture is being verified and validated through implemented scenarios from three important application sectors (film post-production, virtual augmented reality for engineering design, and collaborative e-Learning in virtual worlds).
The transition from laboratory science to in silico e-science has facilitated a paradigmatic shift in the way we conduct modern science. We can use computationally based analytical models to simulate and investigate scientific questions such as those posed by high-energy physics and bioinformatics, yielding high-quality results and discoveries at an unprecedented rate. However, while experimental media have changed, the scientific methodologies and processes we choose for conducting experiments are still relevant. As in the lab environment, experimental methodology requires samples (or in this case, data) to undergo several processing stages. This staging of operations is what constitutes the in silico experimental process.

Initial bioinformatics experiments typically required passing data through several programs in sequence. We'd format the data to conform to application-dependent file formats and then pass it through selected scientific applications or services, which would yield a handful of results or generate new data. This new data would in turn require reformatting and passing through other services. Often, a bioinformatician would have to manually transfer results between services by noting these values and rekeying them into a new interface or by cutting and pasting. Although problematic and error prone, this approach facilitated scientific exploration through experimentation with different hypotheses using different services. This service-oriented approach underpins emerging technologies such as Web Services and the Grid.

The use of workflows formalizes earlier ad hoc approaches for representing experimental methodology. We can represent the stages of in silico experiments formally as a set of services to invoke. Although this formalization can simplify the representation of experimental methodology, referring to specific services limits the utility, portability, and scalability of such workflows.
Such workflows break if any of the services on which they depend is removed or modified. We can't readily share workflows with colleagues or execute them on other computing infrastructures unless the same services exist on the new infrastructure. Even in an open, shared-services environment, several scientists invoking the same workflow would result in service contention, because each workflow would require the same instances. Additionally, social and human factors add further constraints: to preserve their intellectual property, scientists prefer to publish their experiments' structure while keeping the invoked service instances' details private.

By abstracting the workflows, we can construct workflow templates representing the type or class of service to invoke at each experimental stage, without specifying which instance of the service should be used. To use a template, we instantiate the abstracted service representations according to the available services and then manage the data flow appropriately to ensure interoperation between the services. In this article, we address how to use workflow resolution to p...
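The template-instantiation idea described above can be sketched as follows. The class names, service-type labels, registry contents, and first-match resolution rule are illustrative assumptions, not the article's actual implementation.

```python
# Sketch of resolving an abstract workflow template against a registry of
# concrete service instances. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Service:
    name: str          # a concrete instance, e.g. a specific alignment server
    service_type: str  # the abstract class of service it provides

# A template names only the *type* of service needed at each stage,
# not which instance should be used.
template = ["sequence-retrieval", "sequence-alignment", "tree-building"]

# Concrete instances currently available on this infrastructure.
registry = [
    Service("genbank-fetch", "sequence-retrieval"),
    Service("clustalw-node7", "sequence-alignment"),
    Service("phylip-node2", "tree-building"),
]

def instantiate(template, registry):
    """Resolve each abstract stage to the first matching concrete service."""
    workflow = []
    for stage in template:
        match = next((s for s in registry if s.service_type == stage), None)
        if match is None:
            raise LookupError(f"no service available for stage {stage!r}")
        workflow.append(match)
    return workflow

resolved = instantiate(template, registry)
print([s.name for s in resolved])
```

Because the template refers only to service types, the same workflow can be published, shared, and re-resolved on a different infrastructure (or against a private registry) without exposing or depending on particular service instances.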
Abstract-With the increasing availability of Cloud computing services, this paper addresses the challenge that consumers of Infrastructure-as-a-Service (IaaS) face in determining which IaaS provider and resources are best suited to run an application that may have specific Quality of Service (QoS) requirements. Utilising application modelling to predict performance is an attractive concept, but is very difficult given the limited information IaaS providers typically publish about their computing resources. This paper reports on an initial investigation into using Dwarf benchmarks to measure the performance of virtualised hardware, conducting experiments on BonFIRE and Amazon EC2. The results we obtain demonstrate that, as one might expect, labels such as 'small', 'medium', 'large' or a number of ECUs are not sufficiently informative to predict application performance. Furthermore, knowing the CPU speed, cache size or RAM size is not necessarily sufficient either, as other complex factors can lead to significant performance differences. We show that different hardware is better suited to different types of computations and, thus, the relative performance of applications varies across hardware. This is reflected well by Dwarf benchmarks, and we show how different applications correlate more strongly with different Dwarfs, leading to the possibility of using Dwarf benchmark scores as parameters in application models.
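One simple way to use Dwarf benchmark scores as parameters in an application model, as the abstract suggests, is to fit the application's measured runtime against per-machine Dwarf scores and then predict runtime on unseen hardware from its scores alone. The least-squares fit and all numbers below are illustrative assumptions, not the paper's method or data.

```python
# Illustrative sketch: predicting application runtime from Dwarf benchmark
# scores via ordinary least squares. All scores and runtimes are made up.
import numpy as np

# Rows = machine types; columns = scores on two hypothetical Dwarf
# benchmarks (e.g. dense linear algebra, structured grids). Higher = faster.
dwarf_scores = np.array([
    [1.0, 0.8],   # a 'small' instance
    [1.9, 1.1],   # a 'medium' instance
    [3.1, 2.9],   # a 'large' instance
])

# Measured runtime (seconds) of one application on each machine type.
runtimes = np.array([120.0, 75.0, 40.0])

# Fit runtime ~ dwarf_scores @ w + intercept.
X = np.hstack([dwarf_scores, np.ones((3, 1))])
w, *_ = np.linalg.lstsq(X, runtimes, rcond=None)

# Predict runtime on an unseen machine from its Dwarf scores alone.
new_machine = np.array([2.5, 2.0, 1.0])  # scores plus intercept term
predicted = float(new_machine @ w)
```

The point of the Dwarf approach is that the score columns capture how well each machine handles distinct computation patterns, so applications dominated by different Dwarfs can be modelled on the same benchmark data with different weight vectors.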
Abstract-In this paper we focus on how Quality of Service guarantees are provided to virtualised applications in the Cloud Computing infrastructure being developed in the context of the IRMOS European Project. Provisioning proper timeliness guarantees to distributed real-time applications involves the careful use of real-time scheduling mechanisms at the virtual-machine hypervisor level, of QoS-aware networking protocols, and of proper design methodologies and tools for stochastic modelling of the application. The paper focuses on how we applied these techniques to a case study, a real e-Learning mobile content-delivery application integrated into the IRMOS platform, and on the performance achieved.