High-performance computing (HPC) data centres solve complex scientific problems. To reduce their energy consumption, dynamic voltage and frequency scaling (DVFS) of the processor has recently been adopted in HPC systems. In this paper, we address the energy-performance trade-off of such systems using queueing theory, in order to study the potential of the speed scaling technique in HPC data centres. To this end, we configure a real-world environment of 8 servers and develop a simulator based on generalised semi-Markov processes. After validating the correctness of the simulator against the configured real-world environment, we use the simulator to derive optimal asynchronous speed scaling policies. We show that setting the probabilities of switching the processor frequency at the arrival and departure epochs of jobs (customers) to 0.65 and 0.7, respectively, yields an optimal point for the heterogeneous case with a truncated Pareto distribution of the amount of work.
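The asynchronous policy described above can be illustrated with a minimal discrete-event sketch: a single-server FIFO queue in which the processor toggles between two frequencies with probability 0.65 at each arrival epoch and 0.7 at each departure epoch, with job sizes drawn from a truncated Pareto distribution. All numerical parameters besides the two switching probabilities (arrival rate, the two frequencies, the Pareto shape and truncation bounds, the cubic power model) are illustrative assumptions, not values from the paper, and the sketch fixes a job's speed at the start of its service as a simplification.

```python
import random

def truncated_pareto(rng, alpha=1.5, lo=0.5, hi=10.0):
    # Inverse-CDF sample of a Pareto(alpha) law truncated to [lo, hi]
    # (illustrative parameters, not the paper's).
    u = rng.random()
    c = 1.0 - (lo / hi) ** alpha
    return lo * (1.0 - u * c) ** (-1.0 / alpha)

def simulate(p_arr=0.65, p_dep=0.7, n_jobs=20000, lam=0.3,
             freqs=(1.0, 2.0), seed=42):
    """Single-server FIFO queue. The frequency toggles between freqs[0]
    and freqs[1] with probability p_arr at each arrival epoch and p_dep
    at each departure epoch. Returns (mean response time, energy)."""
    rng = random.Random(seed)
    t = 0.0
    next_arr = rng.expovariate(lam)
    arrivals_left = n_jobs
    queue = []                 # (arrival_time, work) of waiting jobs
    cur = None                 # arrival time of the job in service
    dep_time = float('inf')
    f = 0                      # current frequency index
    energy = 0.0               # cubic power model: P ~ freq**3 while busy
    resp_sum = 0.0
    done = 0
    while done < n_jobs:
        if next_arr < dep_time:            # next event: an arrival
            t = next_arr
            queue.append((t, truncated_pareto(rng)))
            if rng.random() < p_arr:
                f ^= 1                     # speed switch at arrival epoch
            arrivals_left -= 1
            next_arr = t + rng.expovariate(lam) if arrivals_left else float('inf')
        else:                              # next event: a departure
            t = dep_time
            resp_sum += t - cur
            done += 1
            cur, dep_time = None, float('inf')
            if rng.random() < p_dep:
                f ^= 1                     # speed switch at departure epoch
        if cur is None and queue:          # begin next job at current speed
            a, work = queue.pop(0)
            cur = a
            service = work / freqs[f]
            dep_time = t + service
            energy += freqs[f] ** 3 * service
    return resp_sum / n_jobs, energy

mean_resp, energy = simulate()
```

Sweeping `p_arr` and `p_dep` over a grid and comparing the resulting (mean response time, energy) pairs is one way to locate the kind of optimal operating point the paper reports.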
CCS CONCEPTS
• Hardware → Enterprise level and data centers power issues; • Mathematics of computing → Queueing theory; • Computing methodologies → Discrete-event simulation.