2020
DOI: 10.1109/lca.2020.2992182
HiLITE: Hierarchical and Lightweight Imitation Learning for Power Management of Embedded SoCs

Cited by 12 publications (7 citation statements)
References 9 publications
“…Recent work also proposes the use of power management systems to help reduce the power and energy consumption of real-time systems using machine learning techniques [44], [36]. HetSched can work in conjunction with these schedulers to further reduce energy along with the efficient utilization of resources to meet real-time deadlines and power constraints.…”
Section: Related Work
Mentioning confidence: 99%
“…However, utilization alone does not provide sufficient information about the characteristics of applications running on the system. To address this drawback, recent approaches have used performance counters to make DRM decisions [1,17,19,20]. The performance counters give fine-grained information about the system state, thus allowing DRM policies to make more intelligent decisions.…”
Section: Related Work
Mentioning confidence: 99%
“…The performance counters give fine-grained information about the system state, thus allowing DRM policies to make more intelligent decisions. Machine learning approaches, such as decision trees [17], RL [2], and IL [10,12,20] have also been used to create DRM policies for mobile platforms. While these approaches are able to improve upon prior DRM methods, they still optimize for a single objective function, such as energy or execution time or PPW.…”
Section: Related Work
Mentioning confidence: 99%
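The two statements above describe DRM policies that map hardware performance counters to power-management decisions, with decision trees among the learned policies. As a minimal illustrative sketch only (not the policy of HiLITE or of the cited works), the Python example below trains a hypothetical decision tree that maps synthetic counter readings to a DVFS frequency level; the feature set, labeling rule, and frequency table are assumptions made for illustration.

```python
# Minimal sketch of a decision-tree DRM policy driven by performance counters.
# All feature names, data, and frequency levels are illustrative assumptions,
# not the actual policy or dataset used in HiLITE or the cited works.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical per-interval counter readings: [instructions-per-cycle,
# cache-miss rate, memory-bandwidth utilization, core utilization].
X = rng.random((500, 4))

# Hypothetical "oracle" labels: index into a table of DVFS frequency levels.
# As a stand-in rule, compute-bound intervals (high IPC) get a high frequency
# and memory-bound intervals (high miss rate) get a low one.
FREQ_LEVELS_MHZ = [600, 1000, 1400, 1800]
y = np.where(X[:, 0] > 0.6, 3, np.where(X[:, 1] > 0.5, 0, 1))

# A shallow tree keeps the runtime decision cheap (a few comparisons).
policy = DecisionTreeClassifier(max_depth=3).fit(X, y)

# At runtime: read the counters for the last interval, pick a frequency.
counters = np.array([[0.7, 0.2, 0.3, 0.9]])   # one hypothetical sample
level = policy.predict(counters)[0]
print(f"set CPU frequency to {FREQ_LEVELS_MHZ[level]} MHz")
```

As the quoted passage notes, a policy like this optimizes a single objective encoded in its labels (energy, execution time, or PPW); handling multiple objectives would require a different labeling or training scheme, which is beyond this sketch.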
“…Since runtime task scheduling is a sequential decision-making problem, supervised learning methodologies, such as linear regression and regression trees, may not generalize to unseen states at runtime. Reinforcement learning (RL) and imitation learning (IL) are more effective for sequential decision-making problems [19,29,31]. Indeed, RL has shown promise when applied to the scheduling problem [20,21,37], but it suffers from slow convergence and sensitivity to the reward function [15,18].…”
Section: Introduction
Mentioning confidence: 99%
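The quoted passage contrasts RL, which needs a carefully designed reward and many interactions to converge, with IL, which learns directly from an oracle's decisions. Below is a minimal DAgger-style sketch of that idea in Python: the current learner drives the state trajectory, an oracle labels every visited state, and a lightweight classifier is retrained on the aggregated dataset, with no reward function involved. The toy environment, oracle rule, and state features are placeholders, not the formulation used in HiLITE or the cited papers.

```python
# Hypothetical DAgger-style imitation-learning loop for a scheduling/DVFS-like
# decision problem. The environment, oracle, and features are illustrative
# placeholders, not the setup of HiLITE or the papers cited above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
N_ACTIONS = 4            # e.g., four frequency levels or four candidate cores
state = rng.random(4)    # hypothetical 4-feature system state


def step(s, action):
    """Toy state transition: the next state drifts with the chosen action."""
    drift = 0.05 * (action - N_ACTIONS / 2) + 0.1 * rng.standard_normal(4)
    return np.clip(s + drift, 0.0, 1.0)


def oracle_action(s):
    """Stand-in for an offline oracle (e.g., an exhaustive-search policy)."""
    return int(s[0] * N_ACTIONS) % N_ACTIONS


# DAgger: roll out the *current learner* to decide which states get visited,
# label every visited state with the oracle's action, and retrain on the
# aggregated dataset. No reward function or exploration schedule is tuned.
states, labels = [], []
policy = None
for iteration in range(5):
    for _ in range(200):
        act = policy.predict([state])[0] if policy is not None else oracle_action(state)
        states.append(state.copy())
        labels.append(oracle_action(state))
        state = step(state, act)
    policy = DecisionTreeClassifier(max_depth=4).fit(states, labels)

print("chosen action for current state:", policy.predict([state])[0])
```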