2014
DOI: 10.1145/2529992

Online learning of timeout policies for dynamic power management

Abstract: Dynamic power management (DPM) refers to strategies which selectively change the operational states of a device during runtime to reduce power consumption based on the past usage pattern, the current workload, and the given performance constraint. The power management problem becomes more challenging when the workload exhibits nonstationary behavior, which may degrade the performance of any single or static DPM policy. This article presents a reinforcement learning (RL)-based DPM technique for optimal select…
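The abstract is truncated, but the idea it describes, learning from observed idle periods which timeout value to use, can be sketched as a bandit-style simplification of an RL approach. This is a minimal illustration, not the authors' algorithm: the candidate timeouts, the epsilon-greedy rule, and the reward shape are all assumptions.

```python
import random

# Hypothetical candidate timeout values (seconds); the learner picks one
# per idle period. Values, rates, and the reward shape are assumptions.
TIMEOUTS = [0.1, 0.5, 1.0, 2.0, 5.0]
EPSILON = 0.1      # exploration probability
ALPHA = 0.2        # learning rate
WAKEUP_COST = 0.8  # assumed energy/latency penalty for one sleep-wake cycle

q = {t: 0.0 for t in TIMEOUTS}  # estimated value of each candidate timeout

def choose_timeout():
    """Epsilon-greedy choice among the candidate timeouts."""
    if random.random() < EPSILON:
        return random.choice(TIMEOUTS)
    return max(TIMEOUTS, key=lambda t: q[t])

def reward(timeout, idle_period):
    """Assumed reward: idle power saved while asleep, minus a fixed
    wake-up cost. If the idle period ends before the timeout expires,
    nothing is saved and the whole wait was wasted idle power."""
    if idle_period <= timeout:
        return -idle_period                       # never slept; all waste
    return (idle_period - timeout) - WAKEUP_COST  # slept, paid wake-up cost

def observe(idle_period):
    """Run once per completed idle period: pick, score, and update."""
    t = choose_timeout()
    q[t] += ALPHA * (reward(t, idle_period) - q[t])
    return t
```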

Cited by 28 publications (21 citation statements). References 41 publications.
“…DPM can be classified into timeout, predictive, stochastic, and machine-learning approaches. The timeout strategy 2,8 switches a device to a low-power state after it has been idle for a certain time period. The timeout strategy is easy to implement, but its drawback is that it wastes power while waiting for the timeout to expire.…”
Section: DPM
confidence: 99%
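To make the quoted drawback concrete, a fixed-timeout policy boils down to roughly the following sketch; the `is_idle`/`sleep_device` hooks are hypothetical placeholders, and a real driver would use timers or interrupts rather than polling.

```python
import time

def timeout_dpm(timeout_s, is_idle, sleep_device):
    """Naive fixed-timeout policy sketch. Note the drawback quoted
    above: idle power is burned for the full `timeout_s` wait before
    the low-power transition happens."""
    idle_since = time.monotonic()
    while is_idle():
        if time.monotonic() - idle_since >= timeout_s:
            sleep_device()   # enter the low-power state
            return True
        time.sleep(0.01)     # polling stand-in for a timer/interrupt
    return False             # a request arrived first; stay awake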
“…However, solving the stochastic optimization problem to find the optimal DPM strategy with linear programming is a complex process. Machine-learning strategies 3,8 apply machine learning to learn request arrival patterns for DPM; they perform well under various workload conditions. However, these strategies require offline data collection and training of a classifier, which is time-consuming and complex.…”
Section: DPM
confidence: 99%
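The offline pipeline this excerpt alludes to, training a classifier on collected idle-period traces and then consulting it online, can be sketched as below. The toy trace, the feature window, and the `BREAK_EVEN_S` threshold are all assumptions for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

BREAK_EVEN_S = 0.5  # assumed break-even sleep time for the device

def make_dataset(idle_trace, history=3):
    """Features: lengths of the last `history` idle periods.
    Label: whether the next idle period exceeds break-even."""
    X, y = [], []
    for i in range(history, len(idle_trace)):
        X.append(idle_trace[i - history:i])
        y.append(idle_trace[i] > BREAK_EVEN_S)
    return X, y

# Offline phase: collect a trace and train the classifier (toy data).
idle_trace = [0.2, 1.4, 0.1, 2.0, 0.3, 1.1, 0.05, 3.2]
X, y = make_dataset(idle_trace)
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Online phase: sleep immediately only if a long idle period is predicted.
recent = idle_trace[-3:]
if clf.predict([recent])[0]:
    pass  # sleep_device() -- predicted idle long enough to amortize wake-up
```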
“…Based on this, a thermal-aware scheduling approach is proposed to reduce the temperature of the system at run-time. Apart from these works, there are other studies that reduce the power consumption of a multicore system by scaling the hardware frequency dynamically [Dhiman and Rosing 2009; Javaid et al. 2011; Ye and Xu 2014; Khan and Rinner 2014]. However, as shown in [Faruque et al. 2010], these approaches cannot guarantee to minimize a system's thermal overhead effectively for all applications.…”
Section: Related Work
confidence: 99%
“…Algorithm 1 provides the pseudo-code of the hierarchical RTM, where affinity selection is performed using a greedy heuristic (lines 1-26) and frequency selection is performed using the Q-learning algorithm (lines 28-35), which is a simplified version of the learning algorithm used in our earlier work [3].…”
Section: B. Hierarchical RTM Algorithm
confidence: 99%
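The frequency-selection step described in this excerpt is standard one-step Q-learning over a table of states and candidate frequencies. A minimal sketch follows; the state/action discretization, rates, and reward shaping are assumptions, not the cited algorithm.

```python
import random

# Hypothetical discretization: states are workload-utilization bins,
# actions are the available CPU frequencies (GHz).
FREQS = [0.6, 1.0, 1.4, 2.0]
N_STATES = 4                        # e.g. utilization quartiles
ALPHA, GAMMA, EPSILON = 0.3, 0.9, 0.1

Q = [[0.0] * len(FREQS) for _ in range(N_STATES)]

def select_frequency(state):
    """Epsilon-greedy action selection over the frequency table."""
    if random.random() < EPSILON:
        return random.randrange(len(FREQS))
    row = Q[state]
    return row.index(max(row))

def learn(state, action, reward, next_state):
    """One-step Q-learning update after observing the reward
    (e.g. an assumed mix of met deadlines and energy used)."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
```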