The demand for data center computing has increased significantly in recent years, resulting in very large energy consumption. Data centers typically comprise three main subsystems: IT equipment, which provides services to customers; the power infrastructure, which supports the IT and cooling equipment; and the cooling infrastructure, which removes the generated heat. This work presents a novel approach to model the energy flows in a data center and optimize its operation holistically. Traditionally, supply-side constraints such as energy or cooling availability have largely been treated independently of IT workload management. This work reduces cost and environmental impact using a holistic approach that integrates the energy supply (e.g., renewable supply and dynamic pricing) and the cooling supply (e.g., chiller and outside-air cooling) with IT workload planning to improve the overall sustainability of data center operations. Specifically, we predict renewable energy as well as IT demand and design an IT workload management plan that schedules IT workload and allocates IT resources within a data center according to time-varying power supply and cooling efficiency. We have implemented and evaluated our approach using traces from real data centers and production systems. The results demonstrate that our approach can reduce both the recurring power costs and the use of non-renewable energy by as much as 60% compared to existing, non-integrated techniques, while still meeting operational goals and Service Level Agreements.
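The abstract does not give the scheduling formulation itself; the following is a minimal, hypothetical sketch of the idea of renewable- and cooling-aware workload planning, in which deferrable IT load is greedily placed into hourly slots with more predicted renewable supply and lower cooling overhead. All function names and numbers are illustrative, not taken from the paper.

```python
# Illustrative sketch only: the paper's actual optimization is not given in the
# abstract. This greedy scheduler shifts deferrable IT load toward hours with
# more predicted renewable supply and better cooling efficiency. All names and
# numbers below are hypothetical.

def schedule_deferrable_load(renewable_kw, cooling_overhead, total_kwh, cap_kw):
    """Allocate `total_kwh` of deferrable IT work across hourly slots.

    renewable_kw     -- predicted renewable power per slot (kW)
    cooling_overhead -- predicted cooling power per IT watt per slot (e.g. 0.2-0.6)
    total_kwh        -- deferrable IT energy to place (kWh)
    cap_kw           -- maximum IT power per slot (kW)
    """
    slots = len(renewable_kw)
    # Prefer slots where renewable supply is high and cooling is cheap.
    order = sorted(range(slots),
                   key=lambda t: (-renewable_kw[t], cooling_overhead[t]))
    plan = [0.0] * slots
    remaining = total_kwh
    for t in order:
        if remaining <= 0:
            break
        alloc = min(cap_kw, remaining)   # 1-hour slots: kW == kWh per slot
        plan[t] = alloc
        remaining -= alloc
    return plan

# Example: 6 hourly slots with a midday solar peak and less efficient
# afternoon cooling; the load lands in the two sunniest hours.
print(schedule_deferrable_load(
    renewable_kw=[0, 5, 20, 30, 15, 2],
    cooling_overhead=[0.25, 0.25, 0.3, 0.35, 0.4, 0.3],
    total_kwh=50, cap_kw=25))
```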
Improving the cooling efficiency of servers has become an essential requirement in data centers, as the power used to cool the servers has become an increasingly large component of the total power consumption. Fan speed control has likewise emerged in recent years as a critical part of the system thermal architecture. However, the state of the art in server fan control often results in over-provisioning of air flow, which leads to high fan power consumption. The problem is exacerbated in server architectures that share cooling resources among server components, where a single hot spot can drive the operation of many fans. To address this problem, this paper presents a novel multi-input multi-output (MIMO) fan controller that uses thermal models developed from first principles to manipulate the operation of the fans. The controller tunes the speeds of individual fans proactively based on predictions of the server temperatures. Experimental results show that, with fans controlled by the optimal controller, over-provisioning of cooling air is eliminated, temperatures are more tightly controlled, and fan energy consumption is reduced by up to 20% compared with a zone-based feedback controller.
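As an illustration only (the paper's controller design is not reproduced here), the sketch below shows what a model-based MIMO fan-speed update can look like when a linear thermal model T[k+1] = A·T[k] + B·u[k] + d is assumed; the matrices, limits and temperatures are made up.

```python
# Minimal sketch, not the paper's controller: a model-based MIMO fan-speed
# update built on a hypothetical linear thermal model
#   T[k+1] = A @ T[k] + B @ u[k] + d
# where T are component temperatures, u are fan speeds, and d captures heat
# load and ambient effects. The controller picks fan speeds so the predicted
# temperatures track their targets, then clips to the fan speed limits.
import numpy as np

def mimo_fan_update(A, B, d, T, T_target, u_min=0.2, u_max=1.0):
    # Solve B @ u ~= T_target - A @ T - d in a least-squares sense.
    rhs = T_target - A @ T - d
    u, *_ = np.linalg.lstsq(B, rhs, rcond=None)
    return np.clip(u, u_min, u_max)

# Toy example: two temperature sensors cooled by two shared fans (made-up numbers).
A = np.array([[0.90, 0.02], [0.01, 0.88]])   # thermal coupling between sensors
B = np.array([[-8.0, -1.0], [-1.5, -7.0]])   # higher fan speed lowers temperature
d = np.array([5.0, 4.0])                     # heat load + ambient contribution
T = np.array([55.0, 60.0])                   # current temperatures (deg C)
T_target = np.array([52.0, 55.0])

print(mimo_fan_update(A, B, d, T, T_target))
```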
Large-scale data centers (~20,000 m²) will be among the major energy consumers of the next generation. The trend toward deploying computer systems in large numbers and in very dense rack configurations has resulted in very high power densities at the room level. Because of high heat loads (~3 MW) in an interconnected environment, data center design based on a simple zonal energy balance is inadequate. The energy consumption of data centers can be severely increased by inadequate air handling systems and rack layouts that allow the hot and cold air streams to mix. In this paper, for the first time, we formulate nondimensional parameters to evaluate the thermal design and performance of large-scale data centers. The parameters, based on temperature and flow data, reflect the convective heat transfer and fluid flow inside the data center. They are formulated as indices that scale from the rack level to the data center level. To provide a proof of concept, computational fluid dynamics models of data centers are used to validate and demonstrate these indices. A first-level design-of-experiments study is carried out to understand the effect of geometry and data center workload on the parameters. Different data center configurations are also investigated to understand the effectiveness of these parameters in specific cases. These parameters not only provide a valuable tool for understanding convective heat transfer in large data centers but also suggest means to improve their energy efficiency.
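The abstract does not state the indices themselves; for illustration, dimensionless indices of this kind are often expressed as a supply/return heat index pair built from rack inlet and outlet temperatures and the CRAC supply temperature. The notation below is ours, not quoted from the paper.

```latex
% Illustrative only: a Supply Heat Index (SHI) and Return Heat Index (RHI)
% based on the heat Q dissipated by the racks and the heat \delta Q picked up
% by the cold air before it reaches the rack inlets (hot/cold stream mixing):
\[
  \mathrm{SHI} = \frac{\delta Q}{Q + \delta Q}
  = \frac{\sum_{j}\sum_{i}\bigl(T^{\,i,j}_{\mathrm{in}} - T_{\mathrm{ref}}\bigr)}
         {\sum_{j}\sum_{i}\bigl(T^{\,i,j}_{\mathrm{out}} - T_{\mathrm{ref}}\bigr)},
  \qquad
  \mathrm{RHI} = \frac{Q}{Q + \delta Q} = 1 - \mathrm{SHI},
\]
% where T^{i,j}_{in} and T^{i,j}_{out} are the inlet and outlet temperatures of
% rack i in row j and T_{ref} is the CRAC supply temperature. Lower SHI (less
% mixing before the racks) indicates better thermal design, and the sums can be
% taken per rack, per row, or over the whole room, which makes such indices
% scalable from rack level to data center level.
```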
Motivation
Reducing resource consumption in data centers is a growing concern for data center designers, operators and users. Accordingly, interest in using renewable energy to provide some portion of a data center's overall energy usage is also growing. One key concern is that the amount of renewable energy needed to satisfy a typical data center's power consumption can lead to prohibitively high capital costs for the power generation and delivery infrastructure, particularly if on-site renewables are used. In this paper, we introduce a method of operating a data center with renewable energy that minimizes dependence on grid power while also minimizing capital cost. We achieve this by integrating data center demand with the availability of resource supplies during operation. We discuss results from the deployment of our method in a production data center.
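As a rough illustration of the integration idea (not the paper's method), the sketch below measures how much grid power a candidate demand profile would still require given predicted on-site renewable generation; demand shaping of the kind described above aims to drive this residual toward zero without over-sizing the renewable plant. Names and numbers are hypothetical.

```python
# Hypothetical illustration: residual grid dependence of a candidate operating
# plan, given hourly predictions of on-site renewable generation.
def grid_dependence(renewable_kw, demand_kw):
    # Energy that must still be drawn from the grid (surplus renewables are not credited).
    grid_kwh = sum(max(d - r, 0.0) for r, d in zip(renewable_kw, demand_kw))
    total_kwh = sum(demand_kw)
    return grid_kwh, grid_kwh / total_kwh if total_kwh else 0.0

# Example with hourly values: shifting demand toward the solar peak lowers both numbers.
print(grid_dependence(renewable_kw=[0, 10, 25, 25, 10, 0],
                      demand_kw=[12, 12, 12, 12, 12, 12]))
```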