Cloud infrastructures consume enormous amounts of energy, and this trend is still growing. An existing solution to lower this consumption is to turn off as many servers as possible, but such approaches do not involve the user as a main lever for saving energy. We introduce a system that offers the user the option of running her application with degraded performance. A user choosing an energy-efficient run promotes better consolidation of the Virtual Machines in the Cloud and may thus help turn off more servers. We evaluated our system on Grid'5000, using the Montage workflow as a benchmark. The experimental results are promising: in energy-efficiency mode, energy consumption can be significantly reduced at the cost of a small increase in execution time.
Services offered by cloud computing are convenient to users for reasons such as their ease of use, flexibility, and financial model. Yet the data centres that host them are known to consume massive amounts of energy. The growing resource utilisation that follows the cloud's success highlights the importance of reducing its energy consumption. This paper investigates a way to reduce the footprint of HPC cloud users by varying the size of the virtual resources they request. We analyse the influence of concurrent applications with different resource sizes on system energy consumption. Simulation results show that larger resources consume more energy even though they complete applications faster. Although smaller resources offer energy savings, reducing the size too much is not always favourable in terms of energy. High energy savings depend on the distribution of user profiles.
Abstract: CO2 emissions related to Cloud computing have reached worrying levels, with no reduction in sight. Cloud users requesting virtual machines are often unaware of these emissions, which concern entire Cloud infrastructures and are thus difficult to attribute to individual resources, such as virtual machines. We propose a CO2 emissions accounting framework that gives flexibility to Cloud providers, predictability to users, and allocates all carbon costs to the users. This paper presents the architecture of our accounting framework and ideas on how to implement it in practice.
Abstract: Cloud computing has become an attractive and easy-to-use solution for users who want to externalize the execution of their applications. However, the data centers hosting cloud systems consume enormous amounts of energy, and reducing this consumption is an urgent challenge given the rapid growth of cloud utilization. In this paper, we explore a way for energy-aware HPC cloud users to reduce their footprint on cloud infrastructures by reducing the size of the virtual resources they request. We study the influence of green users on system energy consumption and compare it with the consumption of more aggressive users in terms of resource utilization. We found that larger resources are more energy demanding even though they execute applications faster, but reducing the resources' size too much is also not beneficial for energy consumption. A tradeoff lies between these two options.