As advanced Cloud services become mainstream, the contribution of data centers to the overall power consumption of modern cities is growing dramatically. The average consumption of a single data center is equivalent to the energy consumption of 25,000 households. Modeling the power consumption of these infrastructures is crucial to anticipate the effects of aggressive optimization policies, but accurate and fast power modeling of high-end servers is a complex challenge that analytical approaches have not yet met. This work proposes an automatic method, based on Multi-Objective Particle Swarm Optimization, for identifying power models of enterprise servers in Cloud data centers. As opposed to previous procedures, our approach not only considers workload consolidation when deriving the power model, but also incorporates other non-traditional factors such as static power consumption and its dependence on temperature. Our experimental results show that we obtain slightly better models than classical approaches while simultaneously simplifying the power model structure, and thus the number of sensors needed, which is very promising for short-term energy prediction. This work, validated with real Cloud applications, broadens the possibilities for deriving efficient energy-saving techniques for Cloud facilities.
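The identification task described above can be illustrated with a minimal sketch: a particle swarm fits the coefficients of a simple server power model that includes a static term, a temperature-dependent leakage term, and a utilization-dependent dynamic term. Both the model form and all coefficients below are illustrative assumptions, not the paper's exact model, and a plain single-objective PSO stands in for the multi-objective variant.

```python
import random

# Hypothetical power model: P = k_static + k_temp * T + k_dyn * u
# (static power, temperature-dependent leakage, utilization-dependent dynamic power).
def power_model(params, T, u):
    k_static, k_temp, k_dyn = params
    return k_static + k_temp * T + k_dyn * u

# Synthetic calibration data generated from known "true" coefficients (illustrative).
TRUE = (60.0, 0.8, 120.0)
random.seed(42)
samples = [(random.uniform(20, 45), random.uniform(0, 1)) for _ in range(200)]
measured = [power_model(TRUE, T, u) for T, u in samples]

def error(params):
    """Mean squared error of the candidate model against the measurements."""
    return sum((power_model(params, T, u) - p) ** 2
               for (T, u), p in zip(samples, measured)) / len(samples)

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO over the 3 model coefficients."""
    dim = 3
    pos = [[random.uniform(0, 200) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_err = [error(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_err[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            e = error(pos[i])
            if e < pbest_err[i]:
                pbest[i], pbest_err[i] = pos[i][:], e
                if e < error(g):
                    g = pos[i][:]
    return g

fitted = pso()
print([round(k, 2) for k in fitted])
```

Because the model is linear in its coefficients, the fit is a convex problem and the swarm converges quickly; the appeal of the metaheuristic is that it keeps working when richer, non-linear terms are added.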
Summary. Computational demand in data centers is increasing because of the growing popularity of Cloud applications. However, data centers are becoming unsustainable in terms of power consumption and growing energy costs, so Cloud providers face the major challenge of placing them on a more scalable curve. Moreover, Cloud services are provided under strict Service Level Agreement conditions, so trade-offs between energy and performance have to be taken into account. Techniques such as Dynamic Voltage and Frequency Scaling (DVFS) and consolidation are commonly used to reduce the energy consumption of data centers, although they are applied independently and their effects on Quality of Service are not always considered. Thus, understanding the relationship between power, DVFS, consolidation, and performance is crucial to enable energy-efficient management at the data center level. In this work, we propose a DVFS policy that reduces power consumption while preventing performance degradation, and a DVFS-aware consolidation policy that optimizes consumption by considering the DVFS configuration that would be necessary when mapping Virtual Machines to maintain Quality of Service. We have performed an extensive evaluation on the CloudSim toolkit using real Cloud traces and an accurate power model based on data gathered from real servers. Our results demonstrate that including DVFS awareness in workload management provides substantial energy savings of up to 41.62% for scenarios under dynamic workload conditions. These outcomes outperform previous approaches that do not consider the integrated use of DVFS and consolidation strategies.
Motivation. Computational demand in data centers is increasing due to the growing popularity of Cloud applications. However, data centers are becoming unsustainable in terms of power consumption and growing energy costs, so they must be placed on a more scalable curve. Recently, there has been growing interest in developing techniques to provide power management in Clouds. Dynamic Voltage and Frequency Scaling (DVFS) helps to reduce the consumption of underutilized resources dynamically, while consolidation strategies significantly decrease the static consumption by reducing the number of active servers, thus increasing their utilization. However, DVFS is traditionally applied locally, regardless of the consolidation techniques in use. Understanding the relationship between power, DVFS, and consolidation is crucial to enable new energy-efficient strategies that combine these effective techniques. For this purpose, the dependence of power on some traditionally ignored factors, such as frequency and static consumption, which increasingly influence the consumption patterns of these infrastructures, must now be considered. Furthermore, as Cloud services are provided under strict Service Level Agreement (SLA) conditions, power consumption in data centers should be minimized by trading off DVFS against performance, without violating the SLA requirements whenever feasible. Also, Cloud workloads vary significantly over time, which hinders the optimal allocation of resources and requires a trade-off between consolidation and performance. Therefore, consolidation policies that are aware of both DVFS and energy consumption while considering QoS have the potential to optimize the sustainability of Cloud data centers.

Proposed Solution. In this work we aim to find an energy optimization strategy for Cloud data centers that combines DVFS and consolidation techniques.
Our policy is not only aware of the utilization of the incoming workload to be assigned, but is also conscious of the impact of its allocation in terms of frequency. One of the main challenges when designing data center optimizations is to implement fast algorithms that can be evaluated at runtime. For this reason, our research focuses on the design of an optimization algorithm that is simple in terms of computational requirements, in which both the decision making and its execution on a real infrastructure are fast. The proposed algorithm is based on a
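A lightweight, frequency-aware allocation step of the kind discussed above can be sketched as follows. This is not the paper's exact policy: the discrete frequency steps, capacities, and power figures are illustrative assumptions, and the per-host power is a simple linear interpolation between idle and full load.

```python
# Hypothetical DVFS-aware placement: each candidate host is evaluated at the *lowest*
# discrete frequency that still covers the utilization after placing the VM, and the
# host with the smallest power increase wins. All numbers are illustrative.

FREQS = [1.2, 1.8, 2.4]                      # available P-states (GHz), assumed
CAPACITY = {1.2: 0.5, 1.8: 0.75, 2.4: 1.0}   # usable utilization at each frequency
P_IDLE = 70.0                                # idle power (W), assumed
P_MAX = {1.2: 110.0, 1.8: 135.0, 2.4: 170.0} # full-load power per frequency (W)

def host_power(util, freq):
    """Linear power model between idle and full load at the given frequency."""
    return P_IDLE + (P_MAX[freq] - P_IDLE) * (util / CAPACITY[freq])

def min_freq_for(util):
    """Lowest frequency whose capacity covers the load, or None if infeasible."""
    for f in FREQS:
        if util <= CAPACITY[f]:
            return f
    return None

def place(vm_util, hosts):
    """hosts: list of current utilizations. Returns the index of the host whose
    power increase is smallest after the DVFS adjustment, or None if the VM fits
    nowhere even at the top frequency."""
    best, best_delta = None, float("inf")
    for i, u in enumerate(hosts):
        f_new = min_freq_for(u + vm_util)
        if f_new is None:
            continue
        f_now = min_freq_for(u)
        # An idle host (u == 0) is assumed switched off, so its current power is 0.
        current = host_power(u, f_now) if u > 0 else 0.0
        delta = host_power(u + vm_util, f_new) - current
        if delta < best_delta:
            best, best_delta = i, delta
    return best

hosts = [0.4, 0.0, 0.7]
print(place(0.3, hosts))
```

Note how the policy naturally favors consolidation: waking the empty host costs its full idle power, while topping up an active host only pays the dynamic increment plus any frequency step-up.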
Data centers handle impressively high figures in terms of energy consumption, and the growing popularity of cloud applications is intensifying their computational demand. Moreover, the cooling needed to keep the servers within reliable thermal operating conditions also has an impact on the thermal distribution of the data room, thus affecting the servers' power leakage. Optimizing the energy consumption of these infrastructures is a major challenge in placing data centers on a more scalable curve. Thus, understanding the relationship between power, temperature, consolidation, and performance is crucial to enable energy-efficient management at the data center level. In this research, we propose novel power- and thermal-aware strategies and models that provide joint cooling and computing optimizations from a local perspective, based on the global energy consumption of metaheuristic-based optimizations. Our results show that combining the awareness of metaheuristic and best fit decreasing algorithms allows us to distill the global energy behavior into faster and lighter optimization strategies that may be used at runtime. This approach allows us to improve the energy efficiency of the data center, considering both computing and cooling infrastructures, by up to 21.74% while maintaining quality of service.
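The cooling/computing trade-off underlying this kind of joint optimization can be sketched numerically: raising the cooling supply temperature improves the chiller's coefficient of performance (COP), but increases the servers' leakage power. The quadratic COP curve and the linear leakage term below are illustrative assumptions, as are all coefficients; the sweep simply looks for the setpoint that minimizes total (IT + cooling) power.

```python
# Illustrative joint cooling/computing trade-off: higher supply temperature means
# cheaper cooling (higher COP) but more server leakage. Coefficients are assumed.

def it_power(t_supply, base=10000.0, leak_coeff=400.0, t_ref=18.0):
    """Data-room IT power (W): base load plus temperature-dependent leakage."""
    return base + leak_coeff * max(0.0, t_supply - t_ref)

def cop(t_supply):
    """Cooling coefficient of performance, growing with supply temperature
    (assumed quadratic curve)."""
    return 0.0068 * t_supply**2 + 0.0008 * t_supply + 0.458

def total_power(t_supply):
    """Total facility power: IT load plus the cooling power needed to remove it."""
    p_it = it_power(t_supply)
    return p_it + p_it / cop(t_supply)

# Sweep candidate setpoints (deg C) and pick the one minimizing total power.
best_t = min(range(15, 31), key=total_power)
print(best_t, round(total_power(best_t), 1))
```

Below the optimum, cooling dominates the bill; above it, leakage does. A runtime policy only needs this kind of cheap evaluation, which is the motivation for distilling the metaheuristic's global view into a lightweight local rule.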