The popularization of cloud computing has raised concerns over the energy consumption of data centers. In addition to the energy consumed by servers, the energy consumed by large numbers of network devices has emerged as a significant problem. Existing work on energy-efficient data center networking focuses primarily on traffic engineering, usually adapted from traditional networks. We propose a new framework that embraces the new opportunities brought by combining special features of data centers with traffic engineering. Based on this framework, we characterize the problem of achieving energy efficiency with a time-aware model, prove its NP-hardness, and propose a two-step solution. First, we assign virtual machines (VMs) to servers so as to reduce the amount of traffic and to create favorable conditions for traffic engineering; this assignment is based on three essential principles that we propose. Second, we reduce the number of active switches and balance traffic flows, depending on the relation between power consumption and routing, to achieve energy conservation. Experimental results confirm that this framework can achieve up to 50% energy savings. We also provide a comprehensive discussion of the scalability and practicability of the framework.
Index Terms: Data center networks, energy efficiency, virtual machine assignment, traffic engineering.
Energy consumption is a growing issue in data centers, affecting both their economic viability and their public image. In this work we empirically characterize the power and energy consumed by different types of servers. For each server, we exhaustively measure the power consumed by the CPU, the disk, and the network interface under different configurations, identifying the optimal operational levels. One interesting conclusion of our study is that the curve defining the minimal CPU power as a function of load is neither linear nor purely convex, as has previously been assumed. Moreover, we find that the efficiency of the various server components can be maximized by tuning the CPU frequency and the number of active cores as a function of the system and network load, while the block size of I/O operations should always be maximized by applications. We also show how to estimate the energy consumed by an application as a function of a few simple parameters, such as CPU load and disk and network activity. We validate the proposed approach by accurately estimating the energy of a MapReduce computation on a Hadoop platform.
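The kind of per-component energy estimation described above can be sketched as a simple additive model: idle power over time plus load-dependent CPU, disk, and network terms. The function below is a minimal illustration; all coefficients are hypothetical placeholders, and the linear CPU term is only a first-order approximation (the abstract itself notes the real CPU power curve is neither linear nor purely convex).

```python
# Illustrative additive server-energy model. Coefficient values are
# hypothetical, NOT measurements from the study described above.

def estimate_energy_joules(duration_s, cpu_util, disk_bytes, net_bytes,
                           p_idle_w=50.0, p_cpu_max_w=40.0,
                           j_per_disk_byte=2e-8, j_per_net_byte=1e-8):
    """Estimate application energy as idle power plus load-dependent terms."""
    e_idle = p_idle_w * duration_s                 # baseline draw over the run
    e_cpu = p_cpu_max_w * cpu_util * duration_s    # crude linear CPU term
    e_disk = j_per_disk_byte * disk_bytes          # energy per byte read/written
    e_net = j_per_net_byte * net_bytes             # energy per byte sent/received
    return e_idle + e_cpu + e_disk + e_net
```

For example, an hour-long run at 50% CPU with no I/O would cost (50 + 40 × 0.5) W × 3600 s = 252 kJ under these placeholder coefficients.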
Despite some proposals for energy-efficient topologies, most studies on saving energy in data center networks focus on traffic engineering, i.e., consolidating flows and switching off unnecessary network devices. The major weakness of this approach is network oscillation caused by frequent topology changes when traffic fluctuates rapidly. In this paper, we propose to incorporate rate adaptation into green data center networks. With rate-adaptive network devices, we aim to approach network-wide energy proportionality through routing optimization. We formalize the problem as an integer program and propose an efficient approximation algorithm, TSRR, which solves the problem quickly while guaranteeing a constant performance ratio. Extensive simulations confirm that more than 40% of the energy can be saved while introducing only a slight stretch in network delay.
IEEE 12th International Symposium on Network Computing and Applications.
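The rate-adaptation idea above can be illustrated with a toy model: a device port supports a few discrete rate levels, each with its own power draw, and the port runs at the lowest level that satisfies its demand. The rate/power pairs below are invented for illustration and do not come from the paper.

```python
# Toy rate-adaptation model: (capacity in Gbps, power in watts) per level.
# These numbers are hypothetical, chosen only to show the selection logic.
RATE_LEVELS = [(1, 10.0), (10, 15.0), (40, 25.0)]

def port_power(demand_gbps):
    """Return the power of the lowest rate level that covers the demand."""
    for capacity, watts in RATE_LEVELS:
        if demand_gbps <= capacity:
            return watts
    raise ValueError("demand exceeds the maximum supported rate")
```

A routing optimizer in this setting would steer flows so that as many ports as possible can drop to a cheaper rate level, rather than switching devices off entirely, which is what avoids the topology oscillation the abstract criticizes.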
Background: The shipping industry has grown spectacularly during the last 50 years and nowadays transports approximately 80-90% of goods worldwide. However, maritime transport remains a highly inefficient industry. Only in the last 10-15 years has the industry started studying how to optimize navigation speeds, and digitization is only now entering ports. Consequently, institutions like the International Maritime Organization (IMO) are pressing for the adoption of measures that increase the industry's efficiency, such as Just-In-Time (JIT) operations.
Methods and Results: This paper shows why the Sea Traffic Management (STM) concept, based on stakeholder collaboration, is a JIT enabler. To do so, we analyze one year of navigation data from 33 ships, estimating the impact of JIT barriers on shipping and showing the benefits that the adoption of STM, at different maturity levels, could provide to the industry. Our evaluation shows that, for containerships alone, STM can help reduce fuel consumption and GHG emissions by 15-23%.
The bisection width of interconnection networks has always been important in parallel computing, since it bounds the speed at which information can be moved from one side of a network to another, i.e., the bisection bandwidth. Finding its exact value has proven to be challenging for some network families. For instance, the problem of finding the exact bisection width of the multidimensional torus was posed by Leighton [1, Problem 1.281] and has remained open for almost 20 years. We provide two general results that allow us to obtain upper and lower bounds on the bisection width of any product graph as a function of some properties of its factor graphs. The power of these results is shown by deriving the exact value of the bisection width of the torus, as well as of several d-dimensional classical parallel topologies that can be obtained by the application of the Cartesian product of graphs. We also apply these results to data centers, by obtaining bounds for the bisection bandwidth of the d-dimensional BCube network, a recently proposed topology for data centers.
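For the torus case mentioned above, the classical closed form for a d-dimensional torus with even side length k is 2k^(d-1) (the d=1 case is a ring, which needs two edge cuts to split in half). The helper below simply evaluates that formula; it assumes equal, even side lengths and is offered only as an illustration of the result, not as the paper's proof technique.

```python
def torus_bisection_width(k, d):
    """Bisection width of a d-dimensional k-ary torus, assuming k even:
    the classical closed form 2 * k**(d-1)."""
    if k % 2 != 0:
        raise ValueError("closed form assumed only for even side length k")
    return 2 * k ** (d - 1)
```

For instance, a ring of 8 nodes (d=1) has bisection width 2, and a 4x4 2-D torus has bisection width 8.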
Abstract. Motivated by current trends in cloud computing, we study a version of the generalized assignment problem in which a set of virtual processors has to be implemented by a set of identical processors. For consistency with the literature, we say that a set of virtual machines (VMs) is assigned to a set of physical machines (PMs). The optimization criterion is to minimize the power consumed by all the PMs. We term the problem Virtual Machine Assignment (VMA). Crucial differences from previous work include a variable number of PMs, the requirement that each VM be assigned to exactly one PM (i.e., VMs cannot be implemented fractionally), and a minimum power consumption for each active PM. Such an infrastructure may be strictly constrained in the number of PMs or in the PMs' capacity, depending on how costly (in terms of power consumption) it is to add a new PM to the system or to heavily load some of the existing PMs. Low usage or an ample budget yields models where PM capacity and/or the number of PMs may be assumed unbounded for all practical purposes. We study four VMA problems, depending on whether the capacity or the number of PMs is bounded. Specifically, we study hardness and online competitiveness for a variety of cases. To the best of our knowledge, this is the first comprehensive study of the VMA problem for this cost function.
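The cost structure described above (a minimum base power for each active PM plus load-dependent consumption, with VMs assigned integrally) can be made concrete with a simple first-fit heuristic. This is a generic bin-packing-style sketch under our own assumptions, not the algorithms analyzed in the paper; the power parameters are hypothetical.

```python
# First-fit sketch for the VMA setting: integral VM assignment, a base power
# cost per active PM, and a load-proportional term. Parameters are illustrative.

def first_fit_vma(vm_loads, pm_capacity):
    """Assign each VM load to the first PM with room, opening PMs as needed.
    Returns a list of PMs, each a list of the VM loads placed on it."""
    pms = []
    for load in vm_loads:
        for pm in pms:
            if sum(pm) + load <= pm_capacity:
                pm.append(load)
                break
        else:
            pms.append([load])  # open a new PM, incurring its base power cost
    return pms

def total_power(pms, base_power=100.0, power_per_unit_load=2.0):
    """Minimum base power per active PM plus a load-proportional term."""
    return sum(base_power + power_per_unit_load * sum(pm) for pm in pms)
```

Because opening a PM costs a fixed base power, consolidating VMs onto fewer machines (as first-fit tends to do) directly reduces total power, which is the intuition behind the bounded-capacity variants the abstract enumerates.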