Power consumption is one of the most critical problems in data centers. One effective way to reduce power consumption is to consolidate the hosted workloads and shut down physical machines that become idle after consolidation. Server consolidation is an NP-hard problem. In this paper, a new algorithm, Dynamic Round-Robin (DRR), is proposed for energy-aware virtual machine scheduling and consolidation. We compare this strategy with the GREEDY, ROUNDROBIN, and POWERSAVE scheduling strategies implemented in the Eucalyptus Cloud system. Our experimental results show that the Dynamic Round-Robin algorithm reduces power consumption significantly compared with the three strategies in Eucalyptus.
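The abstract does not give the details of Dynamic Round-Robin, so the following is only a generic illustration of why consolidation saves power: treating VM placement as bin packing, a simple first-fit-decreasing heuristic (a standard technique, not the paper's DRR algorithm) packs workloads onto fewer machines so the rest can be shut down. The function name and load model are hypothetical.

```python
# Hypothetical illustration: server consolidation as bin packing.
# This is NOT the paper's Dynamic Round-Robin algorithm; it is a generic
# first-fit-decreasing sketch showing how packing VMs onto fewer
# machines lets the idle ones be powered off.

def consolidate(vm_loads, capacity=1.0):
    """Pack VM loads (fractions of one machine's capacity) onto as few
    physical machines as possible; return the machine count."""
    machines = []  # each entry: remaining capacity of one powered-on machine
    for load in sorted(vm_loads, reverse=True):  # place largest VMs first
        for i, free in enumerate(machines):
            if load <= free:
                machines[i] = free - load  # fits on an existing machine
                break
        else:
            machines.append(capacity - load)  # power on a new machine
    return len(machines)

# Six half-loaded VMs fit on 3 machines instead of occupying 6.
print(consolidate([0.5, 0.5, 0.5, 0.5, 0.5, 0.5]))  # -> 3
```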
Abstract - This paper studies the QoS-aware replica placement problem. Although there has been much work on the replica placement problem, most of it concerns average system performance and ignores quality assurance. Quality assurance is very important, especially in heterogeneous environments. We propose a new heuristic algorithm that determines the positions of replicas in order to satisfy the quality requirements imposed by data requests. The experimental results indicate that the proposed algorithm finds a near-optimal solution effectively and efficiently. The algorithm can also adapt to various parallel and distributed environments.
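The abstract does not specify the heuristic, but QoS-aware placement can be framed as a covering problem: each candidate site "covers" the requests whose distance to it is within that request's QoS bound. The sketch below is a generic greedy set-cover heuristic under that framing, not the paper's algorithm; all names and the distance model are assumptions for illustration.

```python
# Hypothetical sketch (not the paper's heuristic): greedy QoS-aware replica
# placement as set cover. A site covers a request if the site is within the
# request's QoS distance bound.

def place_replicas(dist, bounds):
    """dist[s][r]: distance from candidate site s to request r;
    bounds[r]: QoS distance limit of request r.
    Returns a list of chosen site indices."""
    uncovered = set(range(len(bounds)))
    chosen = []
    while uncovered:
        # pick the site satisfying the most still-uncovered requests
        best = max(range(len(dist)),
                   key=lambda s: sum(1 for r in uncovered if dist[s][r] <= bounds[r]))
        covered = {r for r in uncovered if dist[best][r] <= bounds[r]}
        if not covered:
            raise ValueError("some requests cannot meet their QoS bound")
        chosen.append(best)
        uncovered -= covered
    return chosen

# Two sites, three requests: site 0 satisfies requests 0 and 1, site 1 satisfies 2.
dist = [[1, 2, 9], [8, 9, 1]]
bounds = [3, 3, 3]
print(sorted(place_replicas(dist, bounds)))  # -> [0, 1]
```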
Abstract - This paper describes our experiences developing high-performance code for astrophysical N-body simulations. Recent N-body methods are based on an adaptive tree structure. The tree must be built and maintained across physically distributed memory; moreover, the communication requirements are irregular and adaptive. Together with the need to balance the computational workload among processors, these issues pose interesting challenges and tradeoffs for high-performance implementation. Our implementation was guided by the need to keep solutions simple and general. We use a technique for implicitly representing a dynamic global tree across multiple processors which substantially reduces the programming complexity as well as the performance overheads of distributed memory architectures. The contributions include methods to vectorize the computation and minimize communication time which are theoretically and experimentally justified. The code has been tested by varying the number and distribution of bodies on different configurations of the Connection Machine CM-5. The overall performance on instances with 10 million bodies is typically over 48 percent of the peak machine rate, which compares favorably with other approaches.
Index Terms - N-body simulations, parallel processing, Barnes-Hut algorithm, adaptive tree structure, Peano-Hilbert space filling curve.
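The index terms mention a Peano-Hilbert space-filling curve for distributing the body set across processors. As a simplified illustration of the same idea, the sketch below uses the simpler Morton (Z-order) curve instead of Peano-Hilbert: bodies close in space get nearby keys, so sorting by key and cutting the sorted order into chunks gives each processor a spatially local domain. This is an assumed, illustrative substitute, not the paper's implementation.

```python
# Hypothetical illustration of space-filling-curve partitioning. The paper
# uses a Peano-Hilbert curve; this sketch substitutes the simpler Morton
# (Z-order) key, which serves the same purpose of spatial locality.

def morton_key(x, y, bits=16):
    """Interleave the bits of integer coordinates x and y into one key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def partition(bodies, n_procs):
    """Sort bodies by Morton key, then split the sorted order into
    equal-count contiguous chunks, one per processor."""
    ordered = sorted(bodies, key=lambda b: morton_key(*b))
    chunk = (len(ordered) + n_procs - 1) // n_procs
    return [ordered[i:i + chunk] for i in range(0, len(ordered), chunk)]

bodies = [(0, 0), (1, 0), (0, 1), (7, 7)]
print(partition(bodies, 2))  # two spatially contiguous chunks
```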
An energy conservation strategy must address two issues: placement of virtual machine images and workload characteristics of virtual machines. For performance reasons, most cloud systems copy a prototype image into the local disk of a physical machine before starting a virtual machine. If the physical machine that stores the image of a virtual machine is off-line, then we cannot run that virtual machine. The workload characteristics of a virtual machine determine whether it is data-intensive or CPU-intensive. We assume that the system has a distributed file system, so a physical machine can run any virtual machine even if it does not hold the image locally. However, we observe that running a data-intensive virtual machine on a physical machine without its image can incur a 60% performance loss compared with running the same virtual machine on a physical machine that has the image. In contrast, the performance of a CPU-intensive virtual machine is almost independent of whether the physical machine has the image. As a result, an energy conservation algorithm must consider the workload characteristics of a virtual machine when finding a physical machine to run it, especially for data-intensive virtual machines. This paper proposes a workload characteristics-aware virtual machine consolidation algorithm. We propose an approximation algorithm and two dynamic programming algorithms to consolidate virtual machines and reduce the number of physical machines. We conduct experiments and compare the number of physical machines used by our approximation algorithm with the optimal number found by our dynamic programming. The experimental results indicate that our approximation algorithm finds good solutions much faster than the dynamic programming.
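The placement rule described above (prefer image-local machines for data-intensive VMs, place CPU-intensive VMs anywhere with capacity) can be sketched as follows. This is a minimal, hypothetical illustration of the stated constraint, not the paper's approximation algorithm or dynamic programming; the data structures and the best-fit tie-break are assumptions.

```python
# Hypothetical sketch of workload-aware VM placement (not the paper's exact
# algorithms): data-intensive VMs prefer machines that already hold their
# disk image; CPU-intensive VMs may go to any machine with capacity.

def place(vm, machines):
    """vm: dict with 'load', 'kind' ('data' or 'cpu'), and 'image'.
    machines: list of dicts with 'free' capacity and a set of 'images'.
    Returns the chosen machine (mutated in place), or None if none fits."""
    candidates = [m for m in machines if m['free'] >= vm['load']]
    if vm['kind'] == 'data':
        # Avoid the large penalty of remote image access reported in the paper.
        with_image = [m for m in candidates if vm['image'] in m['images']]
        if with_image:
            candidates = with_image
    if not candidates:
        return None
    target = min(candidates, key=lambda m: m['free'])  # best fit aids consolidation
    target['free'] -= vm['load']
    return target

machines = [{'free': 0.6, 'images': {'img-a'}}, {'free': 0.9, 'images': set()}]
vm = {'load': 0.4, 'kind': 'data', 'image': 'img-a'}
chosen = place(vm, machines)
print('img-a' in chosen['images'])  # -> True: the image-local machine wins
```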