SC16: International Conference for High Performance Computing, Networking, Storage and Analysis 2016
DOI: 10.1109/sc.2016.55
A Data Driven Scheduling Approach for Power Management on HPC Systems

Cited by 26 publications (14 citation statements)
References 13 publications
“…The reason for training every 15 minutes is that the queued-job prediction model predicts the power waveform by selecting a job that is similar to the past submitted script. It is desirable to update the model at as short an interval as possible, because jobs executed at a close time to the current job often have similar features to those of the current jobs [10]. We evaluated the interval dependency of the relative error.…”
Section: Queued-job Prediction Model
confidence: 99%
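
As a rough illustration of the queued-job prediction model described in this statement, the sketch below reuses the power waveform of the most similar past job and rebuilds its lookup set on a short cycle; the feature layout, distance metric, and class names are assumptions for illustration, not the cited implementation.

# Minimal sketch of a similarity-based queued-job power-waveform predictor.
# Feature choice, distance metric, and the periodic retrain cycle are assumed.
from dataclasses import dataclass, field
from typing import List
import math

@dataclass
class PastJob:
    features: List[float]        # e.g. node count, requested walltime, queue id
    power_waveform: List[float]  # sampled job power (W) over its runtime

@dataclass
class QueuedJobPredictor:
    history: List[PastJob] = field(default_factory=list)

    def retrain(self, recent_jobs: List[PastJob]) -> None:
        # Rebuild the lookup set; intended to run on a short cycle (e.g. every 15 minutes).
        self.history = list(recent_jobs)

    def predict_waveform(self, features: List[float]) -> List[float]:
        # Return the waveform of the most similar past job (Euclidean distance).
        if not self.history:
            return []
        best = min(self.history, key=lambda j: math.dist(features, j.features))
        return best.power_waveform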
“…Typical HPC system utilization is below 100%. Moreover, job power per node is strongly dependent on each user's application, so the instantaneous power utilization of the systems is below 50% of the maximum system power capacity [10]. As the power prediction accuracy becomes worse, the power reduction with predictive control degrades due to ensuring a margin.…”
Section: Introduction
confidence: 99%
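
A hypothetical back-of-the-envelope calculation of the margin effect mentioned in this statement (none of the numbers come from the cited papers):

# Hypothetical numbers only: how prediction error erodes usable headroom under a power cap.
cap_kw = 1000.0            # facility power cap
predicted_peak_kw = 500.0  # predicted system power (about 50% utilization)
relative_error = 0.10      # relative error of the power prediction

# Predictive control reserves a margin proportional to the prediction error,
# so the headroom it can actually exploit shrinks as accuracy degrades.
margin_kw = predicted_peak_kw * relative_error
usable_headroom_kw = cap_kw - predicted_peak_kw - margin_kw
print(usable_headroom_kw)  # 450.0 kW of exploitable headroom instead of 500.0 kW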
“…Power-performance optimization methodologies typically aim at maximizing application performance while satisfying given constraints (power budget, energy consumption, application execution deadline, etc.). Most of them are mainly based on "power-shifting" among hardware components [9,13,19] or among applications/jobs [3,17,20,24,26]. Usually we have a large number of jobs running on a HPC system, and hence optimizing power-performance in both intra- and inter-application cases is important.…”
Section: Power-performance Optimization
confidence: 99%
“…Chasapis et al proposed a runtime optimization method which changes concurrency levels and socket assignment considering manufacturing variability of chips and the relationships among chips in NUMA nodes [4]. Wallace et al proposed "data-driven" job scheduling strategy [26] which observes the power profile of each job and use it at runtime to decide power budget distribution among jobs running on the system which has limited power budget.…”
Section: Power-performance Optimization
confidence: 99%
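
The statement above summarizes the cited data-driven strategy only at a high level; the sketch below shows one simple way such a policy could look (a proportional-share split of a limited system budget based on observed per-job peak power). It is an assumption for illustration, not the algorithm of Wallace et al. [26].

# Sketch of splitting a limited system power budget among running jobs
# in proportion to the peak power observed in each job's profile so far.
from typing import Dict, List

def distribute_power_budget(observed_profiles: Dict[str, List[float]],
                            system_budget_w: float) -> Dict[str, float]:
    peaks = {job: max(p) for job, p in observed_profiles.items() if p}
    if not peaks:
        return {}
    total = sum(peaks.values())
    if total == 0:
        return {job: system_budget_w / len(peaks) for job in peaks}
    return {job: system_budget_w * peak / total for job, peak in peaks.items()}

# Example: two running jobs sharing a 900 W budget.
print(distribute_power_budget({"jobA": [200.0, 250.0, 240.0], "jobB": [400.0, 500.0]}, 900.0))
# {'jobA': 300.0, 'jobB': 600.0}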
“…Other works try to assign the application to the node where the data is mapped, or at least keep it as close as possible. More advanced solutions try to reduce gradually the job completion time by tuning the initial task allocation, adjusting data locality dynamically based on the status of the system and the network or to reduce the system power consumption, guiding the scheduling decisions. A detailed overview of data-aware scheduling can be found in the work of Caíno-Lores and Carretero.…”
Section: Introduction
confidence: 99%
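
As a small illustration of the data-aware placement idea described in this statement, the helper below prefers the node that already holds a job's data block and otherwise falls back to the nearest free node; the data map, hop-count table, and function name are hypothetical.

# Hypothetical data-aware placement: prefer the node holding the data block,
# otherwise the free node with the fewest network hops to it.
from typing import Dict, Set

def pick_node(block: str,
              data_location: Dict[str, str],          # data block -> node holding it
              hop_count: Dict[str, Dict[str, int]],   # node -> node -> network hops
              free_nodes: Set[str]) -> str:
    home = data_location[block]
    if home in free_nodes:
        return home
    return min(free_nodes, key=lambda n: hop_count[home][n])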