The rising cost of powering server farms has driven recent power-aware development in both industry and academia. At the same time, Service Level Agreements (SLAs) on service performance between customers and service providers must be met to ensure customer satisfaction. This paper investigates a queueing-theoretic power-saving design strategy for server farms under a given SLA, measured here as a percentile of the job response time. We consider server farms whose servers are equipped with Dynamic Voltage and Frequency Scaling (DVFS) and Dynamic Power Management (DPM). We adopt an M/G/1/PS server model in which the job service time distribution is assumed heavy-tailed, as discovered and validated by previous research. We propose a design strategy called PowerTail to minimize power consumption under the given SLA. Our data confirms that the proposed PowerTail strategy indeed provides statistical guarantees in comparison with existing dynamic DVFS approaches and significantly outperforms the intuitive load-balancing strategy. Our methodology can also be applied to other job scheduling algorithms such as First-Come-First-Served (FCFS).
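As a rough illustration of the queueing reasoning behind such a design (this is not the paper's PowerTail algorithm, which targets a response-time percentile), one can pick the minimum processor speed for a single M/G/1/PS server so that the *mean* response time meets a target. For M/G/1/PS the mean is insensitive to the service-time distribution, E[T] = E[X]/(s - λE[X]); the cubic power model and all numeric parameters below are assumptions for the sketch.

```python
# Illustrative sketch: minimum M/G/1/PS speed for a mean response-time
# target d, then the resulting power under an assumed cubic power model.
#   E[T] = E[X] / (s - lam * E[X]),  valid while lam * E[X] / s < 1.

def min_speed_for_mean_sla(lam, mean_work, d):
    """Smallest speed s solving E[X] / (s - lam * E[X]) = d."""
    return lam * mean_work + mean_work / d

def power(s, alpha=3.0):
    """Assumed polynomial (cubic by default) dynamic power model."""
    return s ** alpha

lam = 10.0        # job arrival rate, jobs/sec (assumed)
mean_work = 0.05  # mean work per job at unit speed (assumed)
d = 0.2           # mean response-time target, seconds (assumed)

s = min_speed_for_mean_sla(lam, mean_work, d)
print(f"required speed: {s:.3f}, power: {power(s):.3f}")
# required speed: 0.750, power: 0.422
```

Extending this to a percentile SLA with heavy-tailed service times, as the paper does, requires the response-time distribution rather than just its mean.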
The number and diversity of cores in on-chip systems are increasing rapidly. However, due to the Thermal Design Power (TDP) constraint, it is not possible to operate all cores continuously at the same time. Exceeding the TDP constraint may trigger Dynamic Thermal Management (DTM) to ensure thermal stability. Such hardware-based closed-loop safeguards pose a major challenge for using many-core chips for real-time tasks. Managing the worst-case peak power usage of a chip can help resolve this issue. We present a scheme that minimizes the peak power usage of frame-based and periodic real-time tasks on many-core processors by scheduling the sleep cycles of each active core, and we introduce the concept of a sufficient test on peak power consumption for task feasibility. We consider both inter-task and inter-core diversity in power usage and present computationally efficient algorithms for peak power minimization in these cases, ranging from the special case of "homogeneous tasks on homogeneous cores" to the general case of "heterogeneous tasks on heterogeneous cores". We evaluate our solution through extensive simulations using the 48-core SCC platform and the gem5 architecture simulator. Our simulation results show the efficacy of our scheme.
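The intuition for the homogeneous special case can be sketched as follows: if n identical tasks each execute for c time units within a frame of length T, staggering their start offsets evenly keeps the number of simultaneously active cores near the lower bound ⌈nc/T⌉. This toy sketch (not the paper's algorithms; task parameters are assumed) checks that bound by sampling the frame.

```python
def staggered_schedule(n, c, T):
    """Evenly spaced start offsets for n tasks of length c in a frame of
    length T (execution windows wrap around the frame boundary)."""
    return [(i * T / n) % T for i in range(n)]

def peak_active(offsets, c, T, steps=1000):
    """Sampled maximum number of simultaneously active tasks."""
    peak = 0
    for k in range(steps):
        t = k * T / steps
        active = sum(1 for o in offsets if (t - o) % T < c)
        peak = max(peak, active)
    return peak

n, c, T = 8, 2.0, 10.0            # assumed task set: 8 tasks, 2s each, 10s frame
offs = staggered_schedule(n, c, T)
print(peak_active(offs, c, T))    # matches the lower bound ceil(n*c/T) = 2
```

With homogeneous power draw per core, minimizing concurrent activity directly minimizes the peak power; heterogeneous tasks and cores require the paper's more general algorithms.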
Electricity demand varies on an hourly basis, whereas production is quite inelastic. This results in fluctuating prices. Data centers and industrial consumers of electricity are penalized for the peak power demand of their loads. To shave the peak power demand, a battery buffer can be adopted: the battery is charged during low load and discharged during peaks. One essential question is to analyze how much the peak power demand is reduced by adopting battery buffers. In this paper, power loads are modeled using the concept of arrival curves from Network Calculus. We analyze monotonic controllers, which have two properties: (1) for a given trace of power loads and two initial battery states, starting from the higher battery state never results in a lower battery state in the future; (2) to increase the power demand at time slot t, the power loads released before t should be as close to t as possible. We present a simple and effective monotonic controller and also provide analyses of the peak power demand drawn from the power grid. Our analysis mechanism can help determine the appropriate battery size for a given load arrival curve to reduce the peak.
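A minimal greedy controller conveys the peak-shaving idea (this is only a sketch, not the paper's monotonic controller or its arrival-curve analysis; the battery is assumed ideal and lossless, and the threshold and capacity are illustrative):

```python
def peak_shave(loads, capacity, threshold):
    """Greedy battery controller: try to draw at most `threshold` from the
    grid, discharging the battery during peaks and recharging during low
    load. Returns the resulting grid-power trace."""
    soc = capacity                      # assume the battery starts full
    grid = []
    for load in loads:
        if load > threshold:
            discharge = min(load - threshold, soc)
            soc -= discharge
            grid.append(load - discharge)
        else:
            charge = min(threshold - load, capacity - soc)
            soc += charge
            grid.append(load + charge)
    return grid

loads = [2, 3, 8, 9, 4, 2]              # assumed load trace
print(peak_shave(loads, capacity=5, threshold=5))
# [2, 3, 5, 7, 5, 5]  -- peak drops from 9 to 7
```

Note how the peak is only partially shaved once the battery is depleted (the 9-unit slot still draws 7 from the grid); sizing the battery against the load's arrival curve, as the paper proposes, bounds exactly this effect.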
Server farms nowadays suffer from ever-increasing power consumption, and power saving has become a prominent design issue. This paper presents a power-saving design for server farms under a response-time constraint. In particular, we target multi-tier applications, which are typical of the modern web. We propose an efficient power-saving design strategy called PowerTier. This strategy exploits two major techniques: Dynamic Power Management (DPM) to activate/deactivate servers, and Dynamic Voltage Scaling (DVS) to adjust the processor speed of each activated server. In addition, PowerTier considers two application models: the open-queueing model for session-less web applications and the closed-queueing model for session-based web applications. With PowerTier, we can choose the number of activated servers at each tier and the processor speed for each server to minimize the overall power consumption of the server farm while meeting a given mean response time guarantee for multi-tier applications. Our comprehensive simulations confirm the effectiveness and efficiency of PowerTier.
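The shape of the underlying optimization can be illustrated with a brute-force sketch for the open model (not PowerTier itself; the per-tier queueing formula, the cubic power model, and the parameter values are all assumptions for illustration): pick the server count and speed per tier minimizing total power subject to a mean end-to-end response-time bound.

```python
import itertools

def tier_resp(lam, n, s, work):
    """Mean response time of one tier, modeled as n load-balanced servers
    each behaving like M/G/1/PS at speed s: work / (s - (lam/n) * work)."""
    per = lam / n
    if per * work >= s:
        return float("inf")            # tier overloaded: infeasible
    return work / (s - per * work)

def best_config(lam, works, D, n_max=5, speeds=(1.0, 1.5, 2.0)):
    """Exhaustively search (servers, speed) per tier; minimize total power
    n * s**3 per tier subject to summed mean response time <= D."""
    best = None
    for ns in itertools.product(range(1, n_max + 1), repeat=len(works)):
        for ss in itertools.product(speeds, repeat=len(works)):
            resp = sum(tier_resp(lam, n, s, w)
                       for n, s, w in zip(ns, ss, works))
            if resp <= D:
                pw = sum(n * s ** 3 for n, s in zip(ns, ss))
                if best is None or pw < best[0]:
                    best = (pw, ns, ss)
    return best

# Assumed two-tier workload: 20 req/s, per-tier work 0.02 and 0.05, bound 0.5s.
print(best_config(20.0, [0.02, 0.05], 0.5))
```

Brute force is exponential in the number of tiers; an efficient strategy like PowerTier must exploit the problem structure instead, and the closed (session-based) model changes the response-time formula entirely.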
Chip manufacturers specify the Thermal Design Power (TDP) for a given chip, and the cooling solution is designed to dissipate that power level. But because TDP is not necessarily the maximum power that the chip can consume, chips are operated with Dynamic Thermal Management (DTM) techniques. To avoid excessive triggering of DTM, system designers usually also use TDP as the power constraint. However, using a single constant value such as TDP as the power constraint can result in significant performance losses in many-core systems. Better power-budgeting techniques are a major step towards dealing with the dark silicon problem. This paper presents a new power budget concept, called Thermal Safe Power (TSP), an abstraction that provides safe power-constraint values as a function of the number of simultaneously active cores. Executing cores at any power consumption below TSP ensures that DTM is not triggered. TSP can be computed offline for the worst case, or online for a particular mapping of cores. Our simulations show that using TSP as the power constraint results in 50.5% and 14.2% higher average performance compared to using constant power budgets (both per-chip and per-core) and a boosting technique, respectively. Moreover, TSP results in more optimistic dark silicon estimations than those obtained with constant power budgets.
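The key property of TSP, a safe per-core budget that shrinks as more cores are active, can be conveyed with a toy thermal model (this is not the paper's RC-network computation; the model where each additional active core contributes a small extra thermal resistance, and all constants, are assumptions):

```python
def tsp_per_core(m, t_dtm=80.0, t_amb=45.0, r_self=0.6, r_neigh=0.05):
    """Toy per-core power budget for m simultaneously active cores:
    keep steady-state core temperature t_amb + P * (r_self + (m-1)*r_neigh)
    below the DTM threshold t_dtm. All constants are illustrative."""
    return (t_dtm - t_amb) / (r_self + (m - 1) * r_neigh)

for m in (1, 4, 8, 16):
    print(m, round(tsp_per_core(m), 2))
```

The budget is largest when one core runs alone and decreases with the number of active cores, which is exactly why a single constant per-core budget is either unsafe (too high for many active cores) or wasteful (too low for few).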
Wireless sensor networks are envisioned to be deployed in the absence of permanent network infrastructure and in environments with limited or no human accessibility. Hence, such deployments demand mechanisms to remotely (i.e., over the air) reconfigure and update the software on the nodes. In this paper we introduce DyTOS, a TinyOS-based remote reprogramming approach that enables the dynamic exchange of software components and thus incremental updates of the operating system and its applications. The core idea is to preserve the modularity of TinyOS, i.e., its componentisation, which is lost during the normal compilation process, and to enable runtime composition of TinyOS components on the sensor node. The proposed solution integrates seamlessly into the system architecture of TinyOS: it requires no changes to the TinyOS programming model, and all existing components can be reused transparently. Our evaluation shows that DyTOS incurs a low performance overhead while keeping a memory footprint up to one third smaller than that of comparable solutions.
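The essence of runtime component composition, keeping component boundaries alive after deployment so an implementation can be swapped without touching its callers, can be sketched abstractly (DyTOS itself works with nesC/TinyOS components on a sensor node, not Python; the registry below is purely illustrative):

```python
class Registry:
    """Toy component registry: callers resolve components by name at call
    time, so a remote update can hot-swap an implementation in place."""

    def __init__(self):
        self._components = {}

    def provide(self, name, impl):
        self._components[name] = impl   # install or hot-swap a component

    def call(self, name, *args):
        return self._components[name](*args)

reg = Registry()
reg.provide("blink", lambda: "blink v1")
print(reg.call("blink"))                 # blink v1
reg.provide("blink", lambda: "blink v2") # a remote update swaps the component
print(reg.call("blink"))                 # blink v2
```

The engineering challenge DyTOS addresses is making this indirection cheap enough, in both cycles and memory, for resource-constrained sensor nodes.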