2016
DOI: 10.1145/2834120

Adaptive and Hierarchical Runtime Manager for Energy-Aware Thermal Management of Embedded Systems

Abstract: Modern embedded systems execute applications that interact with the operating system and hardware differently depending on the type of workload. These cross-layer interactions result in wide variations of the chip-wide thermal profile. In this paper, a reinforcement learning-based run-time manager is proposed that guarantees application-specific performance requirements and controls POSIX thread allocation and voltage/frequency scaling for energy-efficient thermal management. This controls three thermal aspects…
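The abstract describes an epoch-based manager that reads thermal and performance state, then sets a thread allocation and a V-F point. As a rough illustration only, here is a minimal Python sketch of such a control loop; every name (read_sensors, choose_action, apply_action), the thresholds, and the operating points are assumptions for the sketch, not the paper's implementation:

```python
import random
import time

# Hypothetical epoch-based run-time manager loop; names, thresholds and
# operating points are illustrative assumptions, not the paper's design.

FREQ_LEVELS_GHZ = [0.6, 1.0, 1.4, 1.8]   # available V-F operating points
MAX_THREADS = 4                           # POSIX threads available to allocate

def read_sensors():
    """Stand-in for reading core temperature and performance counters."""
    return {"temp_c": random.uniform(40, 80),
            "ips": random.uniform(1e8, 1e9)}  # instructions per second

def choose_action(state, perf_target_ips):
    """Pick (threads, freq): scale down when hot, up when missing the target."""
    if state["temp_c"] > 70:                      # too hot: lowest frequency
        return MAX_THREADS, FREQ_LEVELS_GHZ[0]
    if state["ips"] < perf_target_ips:            # too slow: highest frequency
        return MAX_THREADS, FREQ_LEVELS_GHZ[-1]
    return MAX_THREADS // 2, FREQ_LEVELS_GHZ[1]   # otherwise save energy

def apply_action(threads, freq):
    """Stand-in for pthread allocation and cpufreq writes."""
    print(f"allocate {threads} threads, set {freq:.1f} GHz")

if __name__ == "__main__":
    for _ in range(3):                 # one iteration per control epoch
        state = read_sensors()
        apply_action(*choose_action(state, perf_target_ips=5e8))
        time.sleep(0.1)                # control epoch period
```

In the paper the decision step is learned via reinforcement learning rather than the fixed thresholds used in this sketch.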

Cited by 44 publications (26 citation statements; citing years 2016–2023).
References 46 publications (76 reference statements).

Citation statements (ordered by relevance):
“…Statically optimized resource and power management is not likely to achieve the best performance when the input characteristics are changing. As a result, reinforcement learning has been used for DPM [22-26], DVFS [18-21,52], or a combination of DPM, DVFS and mapping [28,53,54] in the embedded, desktop and datacenter domains. A detailed classification of existing RL-based approaches for power/energy management is given in Table 2.…”
Section: Reinforcement Learning For Run-time Management
confidence: 99%
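The statement above surveys RL controllers for DPM and DVFS. To illustrate how such controllers are commonly structured, here is a generic tabular Q-learning sketch for frequency selection; the state discretisation, reward shape and constants are assumptions for the sketch and do not reproduce any of the cited approaches [18-28]:

```python
import random
from collections import defaultdict

# Illustrative tabular Q-learning for DVFS frequency selection.

FREQS = [0.6, 1.0, 1.4, 1.8]          # actions: available frequencies (GHz)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1     # learning rate, discount, exploration

Q = defaultdict(float)                # Q[(state, action)] -> value

def discretize(util):
    """Map CPU utilisation in [0, 1] to one of four coarse state buckets."""
    return min(int(util * 4), 3)

def select_freq(state):
    """Epsilon-greedy action selection over frequency levels."""
    if random.random() < EPS:
        return random.choice(FREQS)
    return max(FREQS, key=lambda f: Q[(state, f)])

def update(state, freq, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, f)] for f in FREQS)
    Q[(state, freq)] += ALPHA * (reward + GAMMA * best_next - Q[(state, freq)])

# Toy episode: the reward penalises energy (~freq^2) and unmet demand.
util = 0.5
for _ in range(1000):
    s = discretize(util)
    f = select_freq(s)
    reward = -(f ** 2) - 5.0 * max(0.0, util - f / FREQS[-1])
    util = random.random()            # next-period workload (stand-in)
    update(s, f, reward, discretize(util))

print({f: round(Q[(2, f)], 2) for f in FREQS})  # learned values, one state
```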
“…Several policies have been followed for shutting cores down, for example, a greedy approach, where a core enters sleep mode as soon as its processing finishes, and a timeout approach, where a core enters sleep mode after a certain period of idleness, i.e., when no request is received within that time. Mapping, DVFS and DPM have been applied both individually and in combination, e.g., mapping in [15,16] and both mapping and DVFS in [27,28].…”
Section: Introduction
confidence: 99%
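A minimal sketch of the two shutdown policies named in the statement, under the simplifying assumption that requests are handled instantaneously; the function name `core_states` and the event trace are hypothetical:

```python
# Timeout-based DPM: a core sleeps after being idle for `timeout` seconds.
# Requests are assumed to be handled instantaneously for simplicity.

def core_states(events, horizon, timeout):
    """events: sorted arrival times of requests at the core.
    Returns (start, end, state) intervals covering [0, horizon]."""
    intervals, t = [], 0.0
    for arrival in list(events) + [horizon]:
        idle_start = t
        if arrival - idle_start > timeout:        # idle long enough: sleep
            intervals.append((idle_start, idle_start + timeout, "idle"))
            intervals.append((idle_start + timeout, arrival, "sleep"))
        elif arrival > idle_start:                # short gap: stay idle
            intervals.append((idle_start, arrival, "idle"))
        t = arrival
    return intervals

print(core_states([1.0, 1.2, 5.0], horizon=6.0, timeout=0.5))
```

Setting timeout=0 degenerates to the greedy policy, where the core sleeps as soon as it becomes idle.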
“…For instance, reducing V may cause an increase in the soft error rate (SER) [6]. Conversely, increasing V causes an increase in temperature, which accelerates aging and the probability of breakdowns [7]. This leads to V_min ≤ V ≤ V_max. (1) An execution may require more than a certain level of throughput to be meaningful [8].…”
Section: Introduction
confidence: 99%
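Reading the reconstructed constraint (1) together with the throughput requirement [8] suggests picking the lowest admissible operating point. A hypothetical helper under an assumed linear throughput model (the symbol V is itself a reconstruction, and the model is not from the cited paper):

```python
# Hypothetical operating-point selection under the reconstructed constraints:
# V_min <= V <= V_max (eq. 1) plus a throughput floor [8].

def pick_operating_point(levels, v_min, v_max, throughput_of, min_throughput):
    """Return the lowest level within [v_min, v_max] that meets the
    throughput requirement, or None if the constraints are infeasible."""
    for v in sorted(levels):
        if v_min <= v <= v_max and throughput_of(v) >= min_throughput:
            return v
    return None

levels = [0.8, 0.9, 1.0, 1.1, 1.2]                 # candidate V (volts)
print(pick_operating_point(levels, 0.9, 1.1,
                           throughput_of=lambda v: 400e6 * v,  # assumed model
                           min_throughput=4.0e8))              # -> 1.0
```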
“…The framework is based upon the Reinforcement Learning (RL) approach described in [11], [12]. It invokes workload prediction and appropriate V-F control to achieve energy minimisation for applications executed on a multi-core hardware platform.…”
Section: Introduction
confidence: 99%
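As an illustration of the "workload prediction plus V-F control" pipeline mentioned in the statement, here is a sketch using an EWMA predictor and a cycle-budget frequency choice; both are assumptions standing in for the RL formulation of [11], [12]:

```python
# Illustrative workload prediction feeding V-F selection; the EWMA predictor
# and the cycle-budget mapping are assumptions, not the cited RL method.

FREQS_HZ = [0.6e9, 1.0e9, 1.4e9, 1.8e9]

def ewma_predict(history, alpha=0.3):
    """Exponentially weighted moving average of past per-epoch cycle demand."""
    pred = history[0]
    for demand in history[1:]:
        pred = alpha * demand + (1 - alpha) * pred
    return pred

def select_vf(predicted_cycles, epoch_s=0.01):
    """Lowest frequency whose per-epoch cycle budget covers the prediction."""
    for f in FREQS_HZ:
        if f * epoch_s >= predicted_cycles:
            return f
    return FREQS_HZ[-1]          # saturate at the maximum frequency

history = [8e6, 9e6, 12e6, 11e6]         # cycles used in recent epochs
pred = ewma_predict(history)
print(f"predicted {pred:.2e} cycles -> {select_vf(pred) / 1e9:.1f} GHz")
```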