Motivation. Computational demand on data centers is increasing due to the growing popularity of Cloud applications. However, data centers are becoming unsustainable in terms of power consumption and growing energy costs, so their consumption must be placed on a more scalable curve. Recently, there has been growing interest in developing power management techniques for Clouds. Dynamic Voltage and Frequency Scaling (DVFS) dynamically reduces the consumption of underutilized resources, while consolidation strategies significantly decrease static consumption by reducing the number of active servers, thus increasing their utilization. However, DVFS is traditionally applied locally, independently of consolidation techniques. Understanding the relationship between power, DVFS, and consolidation is crucial to enable new energy-efficient strategies that combine these effective techniques. To this end, the dependency of power on traditionally ignored factors such as frequency or static consumption, which increasingly influence the consumption patterns of these infrastructures, must now be considered. Furthermore, as Cloud services are provided under strict Service Level Agreement (SLA) conditions, power consumption in data centers should be minimized by exploiting the trade-off between DVFS and performance, without violating the SLA requirements whenever feasible. Also, Cloud workloads vary significantly over time, complicating the optimal allocation of resources, which requires a trade-off between consolidation and performance. Therefore, consolidation policies that are aware of both DVFS and energy consumption while considering QoS have the potential to improve the sustainability of Cloud data centers.

Proposed Solution. In this work we aim to find an energy optimization strategy for Cloud data centers that combines DVFS and consolidation techniques.
Our policy is not only aware of the utilization of the incoming workload to be assigned, but also of the impact of its allocation in terms of frequency. One of the main challenges when designing data center optimizations is to implement fast algorithms that can be evaluated at runtime. For this reason, our research focuses on the design of an optimization algorithm that is computationally lightweight, so that both decision making and its execution in a real infrastructure are fast. The proposed algorithm is based on a