Distributed software environments are increasingly complex and difficult to manage, as they integrate various legacy software with specific management interfaces. Moreover, the fact that management tasks are performed by humans leads to many configuration errors and low reactivity. This is particularly true in medium or large-scale distributed infrastructures. To address this issue, we explore the design and implementation of an autonomic management system. The main principle is to wrap legacy software pieces in components in order to administrate a software infrastructure as a component architecture. However, we observed that the interfaces of a component model are too low-level and difficult to use. Therefore, we introduced higher-level formalisms for the specification of deployment and management policies. This paper overviews these specification facilities, which are provided in the Tune autonomic management system.
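The wrapping principle described above can be sketched as follows. This is a minimal, hypothetical illustration (the class and method names are assumptions, not Tune's actual API): each legacy software piece is hidden behind a uniform component interface, so the autonomic manager can deploy and control heterogeneous software in one way.

```python
# Minimal sketch (hypothetical API, not Tune's actual interfaces):
# each legacy software piece is exposed behind a uniform component
# interface so the manager administrates everything the same way.
from abc import ABC, abstractmethod


class ManagedComponent(ABC):
    """Uniform management interface wrapping one legacy software piece."""

    @abstractmethod
    def deploy(self) -> None: ...

    @abstractmethod
    def start(self) -> None: ...

    @abstractmethod
    def stop(self) -> None: ...


class LegacyServerWrapper(ManagedComponent):
    """Wraps a legacy server behind the component interface."""

    def __init__(self) -> None:
        self.state = "undeployed"

    def deploy(self) -> None:
        # A real wrapper would copy binaries and generate config files here.
        self.state = "deployed"

    def start(self) -> None:
        self.state = "running"

    def stop(self) -> None:
        self.state = "stopped"


def administrate(components: list[ManagedComponent]) -> None:
    """The manager sees only the component architecture, never the
    legacy-specific details hidden inside each wrapper."""
    for c in components:
        c.deploy()
        c.start()
```

Higher-level deployment and management policies would then be expressed against `ManagedComponent`, not against each legacy tool's own interface.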
Abstract. Nowadays, datacenters are among the most energy-consuming facilities, due to the worldwide increase in cloud, web-service and high-performance computing demands. To be clean and to operate with no connection to the grid, datacenter projects try to draw their electricity from renewable energy sources and storage elements. Nevertheless, due to the intermittent nature of these power sources, most existing work still relies on the grid as a backup. This paper presents a model that considers the datacenter workload and the moments when renewable energy can be supplied by the power side without the grid. We propose to optimize the IT scheduling so that tasks execute within a given power envelope of only renewable energy, treated as a constraint.
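The envelope-constrained scheduling idea can be illustrated with a small greedy sketch. This is not the paper's actual algorithm, only an assumption-laden example: time is discretized into slots, each slot has a renewable power budget, and tasks are placed at the earliest start where their power demand fits in every slot they span.

```python
# Illustrative sketch (not the paper's actual model): greedily place
# tasks into discrete time slots so that total IT power in each slot
# never exceeds the renewable-only power envelope for that slot.

def schedule_in_envelope(tasks, envelope):
    """tasks: list of (name, power_watts, duration_slots);
    envelope: available renewable power (watts) per time slot.
    Returns {name: start_slot} for every task that fits."""
    used = [0.0] * len(envelope)          # power already committed per slot
    placement = {}
    # Place the most power-hungry tasks first (a common greedy heuristic).
    for name, power, duration in sorted(tasks, key=lambda t: -t[1]):
        for start in range(len(envelope) - duration + 1):
            window = range(start, start + duration)
            if all(used[s] + power <= envelope[s] for s in window):
                for s in window:
                    used[s] += power
                placement[name] = start
                break                     # earliest feasible start wins
    return placement
```

For example, with an envelope of [100, 200, 200, 50] watts, a 150 W task lasting two slots is pushed to slots 1-2 (the solar peak), leaving slot 0 free for an 80 W task.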
Energy savings are among the most important topics concerning Cloud and HPC infrastructures nowadays. Servers consume a large amount of energy, even when their computing power is not fully utilized. These static costs are a real concern, mostly because many datacenter managers over-provision their infrastructures compared to actual needs. This results in a large share of wasted power consumption. In this paper, we propose the BML ("Big, Medium, Little") infrastructure, composed of heterogeneous architectures, and a scheduling framework dealing with energy proportionality. We introduce heterogeneous power processors inside datacenters as a way to reduce energy consumption when processing variable workloads. Our framework brings an intelligent utilization of the infrastructure by dynamically executing applications on the architecture that suits their needs, while minimizing energy consumption. In this paper we focus on a distributed stateless web-server scenario and analyze the energy savings achieved through energy proportionality.
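The "Big, Medium, Little" routing decision can be sketched in a few lines. The capacity and power figures below are invented for illustration, not measurements from the paper: the point is simply to route the current load to the least power-hungry architecture class that can still sustain it.

```python
# Hedged sketch of the "Big, Medium, Little" idea; the
# (capacity in requests/s, power in watts) profiles are
# illustrative assumptions, not the paper's measurements.
PROFILES = {
    "little": (100, 5.0),
    "medium": (500, 40.0),
    "big":    (2000, 200.0),
}


def pick_architecture(load_rps):
    """Return the cheapest (in watts) architecture class whose capacity
    covers load_rps, falling back to 'big' when nothing covers it."""
    candidates = [(power, name) for name, (cap, power) in PROFILES.items()
                  if cap >= load_rps]
    if not candidates:
        return "big"  # saturate the largest machine rather than drop load
    return min(candidates)[1]
```

With these profiles, a 50 req/s trickle runs on the low-power "little" node at 5 W instead of keeping a 200 W "big" server awake, which is the energy-proportionality argument in miniature.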
We propose in this paper to study energy-, thermal- and performance-aware resource management in heterogeneous datacenters. Witnessing the continuous development of heterogeneity in datacenters, we are confronted with different behaviors in terms of performance, power consumption and thermal dissipation: indeed, heterogeneity at the server level lies both in the computing infrastructure (computing power, electrical power consumption) and in the heat-removal systems (different enclosures, fans, thermal sinks). The physical locations of the servers also become important with heterogeneity, since some servers can (over)heat others. While many studies address these parameters independently (most of the time performance and power or energy), we show in this paper the necessity of tackling all these aspects for optimal management of the computing resources. This leads to improved energy usage in a heterogeneous datacenter, including the cooling of the computer rooms. We build our approach on the concept of a heat distribution matrix to handle the mutual influence of the servers in heterogeneous environments, which is novel in this context. We propose a heuristic to solve the server placement problem and design a generic greedy framework for the online scheduling problem. We derive several single-objective heuristics (for performance, energy, cooling) and a novel fuzzy-based priority mechanism to handle their tradeoffs. Finally, we show results using extensive simulations fed with actual measurements on heterogeneous servers.
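The heat-distribution-matrix concept can be sketched as follows. All numbers, names and the simple linear model here are illustrative assumptions, not the paper's calibrated data: server i's inlet temperature is modeled as ambient plus the weighted heat contributions recirculated from every server j, and a greedy placement picks the server whose added load produces the lowest predicted hotspot.

```python
# Illustrative sketch of the heat-distribution-matrix idea; the linear
# model and thresholds are assumptions, not the paper's measured data.

def inlet_temperatures(D, powers, ambient=20.0):
    """D[i][j]: contribution (degrees C per watt) of server j's
    dissipated heat to server i's inlet; powers[j]: watts drawn by j."""
    n = len(powers)
    return [ambient + sum(D[i][j] * powers[j] for j in range(n))
            for i in range(n)]


def greedy_place(D, powers, job_power, t_max=35.0, ambient=20.0):
    """Greedily pick the server where adding job_power keeps every
    predicted inlet temperature below t_max, preferring the candidate
    with the lowest resulting hotspot; returns an index or None."""
    best = None
    for cand in range(len(powers)):
        trial = list(powers)
        trial[cand] += job_power
        hotspot = max(inlet_temperatures(D, trial, ambient))
        if hotspot <= t_max and (best is None or hotspot < best[0]):
            best = (hotspot, cand)
    return None if best is None else best[1]
```

In this toy setup, a strongly asymmetric matrix (server 0 heating server 1's neighbor more than the reverse) makes the greedy rule prefer the placement whose recirculated heat raises the room hotspot the least, which is the mutual-influence effect the matrix is meant to capture.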