As cloud applications proliferate and data-processing demands increase, server resources must grow to unleash the performance of emerging workloads that scale well with large numbers of compute nodes. Nevertheless, power has become a crucial bottleneck that restricts horizontal scaling (scale-out) of server systems, especially in datacenters that employ power over-subscription. When a datacenter hits the maximum capacity of its power provisioning equipment, the owner has to either build another facility or upgrade the existing utility power infrastructure; both approaches add huge capital expenditure, require significant construction lead time, and can further increase the owner's carbon footprint.

This paper proposes Oasis, a power provisioning scheme that enables power- and carbon-constrained datacenter servers to scale out economically and sustainably. Oasis naturally supports incremental power capacity expansion with near-zero environmental impact, as it takes advantage of modular renewable energy systems and emerging distributed battery architectures. It allows a scale-out datacenter to double its capacity using 100% green energy with up to 25% less overhead cost. This paper also describes our implementation of an Oasis prototype and introduces our multi-source-driven power management scheme, Ozone. Ozone allows Oasis to identify the most suitable power supply control strategies and adjust server load cooperatively to maximize overall system efficiency and reliability. Our results show that Ozone can reduce the performance degradation of Oasis to 1%, extend Oasis battery lifetime by over 50%, and almost triple the average battery backup capacity, which is crucial for mission-critical systems.

Figure 1: "How will your organization handle increased capacity demand over the next 12-18 months?" Results from the Uptime Institute 2012 Data Center Industry Survey [3].
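The abstract describes Ozone as choosing among power supply control strategies while cooperatively adjusting server load. A minimal sketch of such a multi-source selection policy might look like the following; the function name, thresholds, and strategy labels are all illustrative assumptions, not the paper's actual implementation.

```python
def select_power_strategy(solar_w, load_w, battery_soc, soc_reserve=0.4):
    """Pick a supply strategy for one control interval (hypothetical sketch).

    solar_w     -- current renewable output in watts
    load_w      -- current server power draw in watts
    battery_soc -- battery state of charge in [0, 1]
    soc_reserve -- SoC floor preserved as backup for mission-critical loads
    """
    if solar_w >= load_w:
        # Renewables cover the load; any surplus can recharge the battery.
        return "solar-only"
    if battery_soc > soc_reserve:
        # Battery bridges the deficit without dipping into the backup reserve.
        return "solar+battery"
    # Otherwise cooperatively cap server load (e.g., via DVFS or admission
    # control) so demand matches the available renewable supply.
    return "solar+throttle"
```

The reserve floor reflects the abstract's emphasis on preserving battery backup capacity for mission-critical systems: discharging is allowed only above that floor, and below it the policy shifts to load adjustment instead.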
Virtualization technology is being widely adopted by servers and data centers in the cloud computing era to improve resource utilization and energy efficiency. Nevertheless, the heterogeneous memory demands of multiple virtual machines (VMs) make it more challenging to design efficient memory systems. Even worse, mission-critical VM management activities (e.g., checkpointing) can incur significant runtime overhead due to intensive I/O operations. In this paper, we propose to leverage the adaptable and non-volatile features of emerging phase change memory (PCM) to achieve efficient virtual machine execution. Toward this end, we exploit VM-aware PCM management mechanisms, which 1) smartly tune SLC/MLC page allocation within a single VM and across different VMs and 2) keep critical checkpointing pages in PCM to reduce I/O traffic. Experimental results show that our single-VM design (IntraVM) improves performance by 10% and 20% compared to pure SLC- and MLC-based systems, respectively. Further incorporating VM-aware resource management schemes (IntraVM+InterVM) increases system performance by 15%. In addition, our design shortens checkpoint/restore duration by 46% and reduces the overall I/O penalty to the system by 50%.
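The abstract's two mechanisms, tuning SLC/MLC page allocation and pinning checkpoint pages in PCM, can be sketched as a simple placement policy. This is a hypothetical illustration under assumed names and thresholds, not the paper's IntraVM algorithm: hot or checkpoint-critical pages go to fast SLC regions, while cold pages go to dense MLC regions.

```python
def place_page(access_count, is_checkpoint_page, hot_threshold=64):
    """Decide whether a guest page lands in an SLC or MLC PCM region.

    Hypothetical sketch: access_count approximates page hotness, and
    hot_threshold is an assumed tuning knob, not a value from the paper.
    """
    if is_checkpoint_page:
        # Keeping checkpoint state in (non-volatile) PCM avoids the
        # intensive I/O traffic of writing it out to storage.
        return "SLC"
    # Frequently accessed pages benefit from SLC's lower latency;
    # cold pages are packed into higher-density MLC.
    return "SLC" if access_count >= hot_threshold else "MLC"
```

An inter-VM extension in the same spirit would rebalance each VM's SLC quota from global access statistics, which is one plausible reading of the IntraVM+InterVM combination.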