2014
DOI: 10.1016/j.peva.2014.07.008

A distributed speed scaling and load balancing algorithm for energy efficient data centers

Cited by 11 publications (7 citation statements)
References 23 publications

“…While the study was meaningful in that they formally stated the problem and suggested a rough solution sketch, it has the limitation that load balancing becomes arbitrary for servers with the same efficiency (i.e., identical servers). Ko and Cho [12] proposed a new load balancing and speed scaling framework that combines a distributed optimization algorithm with modern queueing-theoretic analysis to account for the tail probability of response time. Despite its methodological novelty, technical requirements such as a priori knowledge of stationary workload processes restrict its practicality.…”
Section: B. Real-Time Operation
confidence: 99%
“…As such, researchers in various fields have been developing novel technologies to pursue more efficient CPU usage, for example, dynamic voltage/frequency scaling (DVFS), wake-on-LAN (WoL), and virtualization (Figure 1: Breakdown of power consumption in a server [10]). Alongside these energy-saving technologies, recent studies have focused on achieving two conflicting objectives: energy efficiency and quality of service (QoS) (see [5], [9], [12]-[16] and references therein). Most of them require solving complicated optimization problems, and the solutions may not be easy to interpret and implement.…”
Section: Introduction
confidence: 99%
“…Binding the QoS-related constraints implies that the metrics are maintained at a constant value, and suggests the need to investigate the stabilization of response times. Although some proposed methodologies [2,3,4] assume the stationarity of data-traffic arrival processes, nonstationary properties, such as the time-varying arrival rates observed in real data [5], make it difficult to analyze queueing-system performance.…”
Section: Introduction
confidence: 99%
“…Modern data centers consume tremendous amounts of energy to supply networking, computing, and storage services to global IT companies. Concerns about energy consumption have prompted researchers to explore operational methods that maximize energy efficiency while satisfying a certain level of quality of service (QoS) [2,3,4]. QoS can be achieved by adding constraints that impose upper bounds on response-time-related metrics, e.g., the mean virtual response time and the tail probability of the response time.…”
Section: Introduction
confidence: 99%
“…While their algorithms can meet service level agreements (SLAs), considering only the mean response time is not sufficient in real situations. Ko and Cho propose a distributed algorithm that considers the tail probability of the response time in G/G/1/PS queues. Kwon and Gautam suggest time-stabilizing approaches for the queue length of a data center under time-varying arrival rates by formulating the problem as an MIP.…”
Section: Introduction
confidence: 99%
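The tail probability of the response time that these excerpts refer to can be estimated numerically when no closed form is available. The following Python sketch is a rough illustration only: it simulates a single M/M/1 processor-sharing server (a simplification of the G/G/1/PS setting mentioned above), and the function name and parameters are assumptions of mine, not from the cited papers.

import random

def estimate_ps_tail(lam, mu, speed, tau, num_jobs=100_000, seed=1):
    """Estimate P(response time > tau) for an M/M/1 processor-sharing
    server running at a fixed speed, via discrete-event simulation.

    lam   -- Poisson arrival rate
    mu    -- rate of the exponential job-size distribution (mean size 1/mu)
    speed -- server speed; capacity is shared equally among jobs in system
    tau   -- delay bound whose exceedance probability is estimated
    """
    rng = random.Random(seed)
    t = 0.0
    next_arrival = rng.expovariate(lam)
    in_system = []           # list of [remaining_work, arrival_time]
    completed, exceeded = 0, 0

    while completed < num_jobs:
        if in_system:
            # Under processor sharing each job is served at rate speed / n.
            per_job_rate = speed / len(in_system)
            next_completion = t + min(j[0] for j in in_system) / per_job_rate
        else:
            next_completion = float("inf")

        if next_arrival <= next_completion:
            # Advance all jobs to the arrival epoch, then admit the new job.
            dt = next_arrival - t
            for j in in_system:
                j[0] -= dt * speed / len(in_system)
            t = next_arrival
            in_system.append([rng.expovariate(mu), t])
            next_arrival = t + rng.expovariate(lam)
        else:
            # Advance all jobs to the completion epoch and remove finished ones.
            dt = next_completion - t
            n = len(in_system)
            for j in in_system:
                j[0] -= dt * speed / n
            t = next_completion
            finished = [j for j in in_system if j[0] <= 1e-12]
            in_system = [j for j in in_system if j[0] > 1e-12]
            for j in finished:
                completed += 1
                if t - j[1] > tau:
                    exceeded += 1

    return exceeded / completed

# Example: utilization lam / (mu * speed) = 0.7, delay bound tau = 2.0.
print(estimate_ps_tail(lam=0.7, mu=1.0, speed=1.0, tau=2.0))

The estimate ignores warm-up effects, so a serious study would discard an initial transient; the sketch is only meant to make the "tail probability of response time" metric concrete.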