I. ABSTRACT
The practical advantages of pay-as-you-go, scalable computing have made large-scale cloud computing services an appealing option for many consumers. At the same time, large-scale datacenters have attracted attention as one of the fastest growing segments of carbon emissions. In this paper, we attempt to quantify the footprint of various sizes of datacenters in the context of two popular types of small-scale business applications (represented by TPC-C and TPC-H). We evaluate energy, materials, and cost as systems scale, accounting for infrastructure, provisioning for future growth, and underutilized resources.
Current resource provisioning schemes in Internet services leave servers less than 50% utilized almost all the time. At this level of utilization, the servers' energy efficiency is substantially lower than at peak utilization. A solution to this problem could be dynamically consolidating workloads onto fewer servers and turning the others off. However, services typically resist doing so because of high response times during re-activation in handling traffic spikes. Moreover, services often want the memory and/or storage of all servers to be readily available at all times.
In this paper, we propose a family of barely-alive active low-power server states that facilitates both fast re-activation and access to memory while in a low-power state. We compare these states to previously proposed active and idle states. In particular, we investigate the impact of load bursts on each energy-saving scheme. We also evaluate the additional benefits of memory access under low-power states with a study of a search service using a cooperative main-memory cache. Finally, we further investigate our barely-alive states in two case studies: (1) a mixed system that combines a barely-alive state with the off state to maximize energy savings; and (2) a barely-alive system that facilitates hosting more than one service at the same time. We find that the barely-alive states can reduce service energy consumption by up to 38%, compared to an energy-oblivious system. These energy savings are consistent across a large parameter space. In the presence of two services, the barely-alive system can reduce energy consumption by 34% over a two-system deployment.
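The consolidation argument above can be made concrete with a back-of-the-envelope model. The sketch below is purely illustrative and not from the paper: it assumes a linear server power curve (a common approximation) and made-up wattage figures, including a hypothetical barely-alive power level, to show why moving load onto fewer servers and parking the rest in a low-power state saves energy even though per-server utilization rises.

```python
# Illustrative model only; all wattage figures below are assumed,
# not taken from the paper. Linear power curve approximation:
#   P(u) = P_idle + (P_peak - P_idle) * u
P_IDLE = 150.0          # watts of an idle active server (assumed)
P_PEAK = 250.0          # watts at full utilization (assumed)
P_BARELY_ALIVE = 40.0   # watts in a hypothetical barely-alive state (assumed)
FLEET = 100             # fixed fleet size for the example

def server_power(utilization):
    """Power draw of one active server at the given utilization (0..1)."""
    return P_IDLE + (P_PEAK - P_IDLE) * utilization

def cluster_power(total_load, n_active):
    """Total fleet power when `total_load` (in server-equivalents of
    work) is spread evenly over `n_active` servers, with the remaining
    servers of the FLEET parked in the barely-alive state."""
    per_server = total_load / n_active
    active = n_active * server_power(per_server)
    parked = (FLEET - n_active) * P_BARELY_ALIVE
    return active + parked

# 50% average utilization spread across all 100 servers...
spread_out = cluster_power(total_load=50, n_active=100)
# ...versus consolidating onto 60 servers, parking 40 in barely-alive.
consolidated = cluster_power(total_load=50, n_active=60)
savings = 1 - consolidated / spread_out
```

Under these assumed numbers the spread-out configuration draws 100 × 200 W = 20 kW, while the consolidated one draws 60 × 233.3 W + 40 × 40 W ≈ 15.6 kW, roughly a 22% saving; the actual savings reported in the paper (up to 38%) depend on its measured power curves and workloads, not this toy model.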