2017 IEEE International Conference on Computer Design (ICCD)
DOI: 10.1109/iccd.2017.28

Machine Learning-Based Approaches for Energy-Efficiency Prediction and Scheduling in Composite Cores Architectures

Cited by 40 publications (15 citation statements)
References: 19 publications
“…In contrast to the previous approaches, recent works have proposed models that can predict what core configuration will best meet an application's requirements [32,40]. [1] proposed an offline approach to optimize energy and performance using data mining to determine an optimal scheduling of applications to cores.…”
Section: Related Work (mentioning)
confidence: 99%
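The offline, data-mining-based scheduler attributed to [1] is only named in the statement above, not described. As a minimal sketch of that general idea, assuming per-application profiling features and an observed most-energy-efficient core are available offline (the feature names, values, and labels below are hypothetical), a decision tree could be trained to map applications to cores:

```python
# Hypothetical sketch: offline data mining for application-to-core scheduling.
# Features, values, and labels are illustrative, not from the cited papers.
from sklearn.tree import DecisionTreeClassifier

# Offline profiling data: [IPC, cache miss rate, branch mispredict rate]
profiles = [
    [1.8, 0.02, 0.01],   # compute-bound application
    [0.6, 0.15, 0.04],   # memory-bound application
    [1.2, 0.08, 0.02],
    [0.4, 0.22, 0.05],
]
# Most energy-efficient core observed for each profile during offline runs
best_core = ["big", "little", "big", "little"]

# Train an offline model that predicts the energy-efficient core for new apps
model = DecisionTreeClassifier(max_depth=3).fit(profiles, best_core)

# At scheduling time, a newly profiled application is assigned to a core
new_app = [[0.7, 0.18, 0.03]]
print("Schedule on:", model.predict(new_app)[0])
```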
“…[1] proposed an offline approach to optimize energy and performance using data mining to determine an optimal scheduling of applications to cores. [32] used several machine learning algorithms to determine an energy-efficient scheduling of applications to cores, where the work focused on different core architectures in contrast to our work which focuses on different cache configurations. [42] used a metric known as Performance Impact Estimation (PIE) to predict the performance of different workloads when executed on different cores for making decisions on whether to migrate a specific workload to a different core.…”
Section: Related Work (mentioning)
confidence: 99%
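The PIE metric [42] is only named above for its purpose (deciding whether to migrate a workload to a different core type). The sketch below is a simplified, hypothetical stand-in for such a migration check, not the published PIE estimator: it compares energy per instruction on the current core against a prediction for the other core type, using a placeholder IPC scaling factor.

```python
# Simplified, hypothetical PIE-style migration check (not the published model).
from dataclasses import dataclass

@dataclass
class CoreSample:
    ipc: float          # measured instructions per cycle on the current core
    power_watts: float  # average power draw of the current core

def predicted_ipc_on_other_core(sample: CoreSample, scaling: float) -> float:
    """Placeholder predictor: assumes a fixed IPC scaling factor between
    core types. A real PIE-like estimator would use MLP/ILP statistics."""
    return sample.ipc * scaling

def should_migrate(sample: CoreSample, other_power: float, scaling: float) -> bool:
    # Compare energy per instruction (power / IPC) on both cores and
    # migrate only if the other core is predicted to be more efficient.
    current_epi = sample.power_watts / sample.ipc
    predicted_epi = other_power / predicted_ipc_on_other_core(sample, scaling)
    return predicted_epi < current_epi

# Example: a low-IPC (memory-bound) workload running on a power-hungry big core
sample = CoreSample(ipc=0.5, power_watts=4.0)
print(should_migrate(sample, other_power=1.0, scaling=0.8))  # True: migrate
```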
“…In [27], the authors apply machine learning to find out energy-efficient con-[…] Table 1, and thus need to implement some non-trivial module to collect such data at runtime. On the other hand, these studies alleviate the performance prediction problem of mappings by either focusing on task/thread executions on some specific resources such as in [21,25,26,24] without considering the communication aspects, or focusing on the prediction of thread and/or core numbers or core configurations such as in [23,22,27] without investigating the explicit thread/task-core binding solutions. No microarchitecture-dependent information is required in our approach, contrary to approaches such as [29] or [30].…”
Section: Related Work (mentioning)
confidence: 99%
“…manager to dynamically tailor the hardware configuration to the changing program execution characteristics. It avoids the pitfalls of a static optimization strategy where the hardware configuration remains unchanged for the dynamically evolving program phases [23], [24], [25], [26].…”
Section: Introduction (mentioning)
confidence: 99%
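The quoted introduction contrasts a runtime manager that re-tunes the hardware with a static, one-shot configuration, without giving the mechanism. A minimal sketch under made-up assumptions (IPC sampled per interval, a fixed phase-change threshold, a two-entry configuration table) shows what such a phase-triggered reconfiguration loop could look like:

```python
# Hypothetical phase-triggered reconfiguration loop (illustrative only).

PHASE_CHANGE_THRESHOLD = 0.2  # relative IPC change that signals a new phase

def detect_phase_change(prev_ipc: float, curr_ipc: float) -> bool:
    return abs(curr_ipc - prev_ipc) / prev_ipc > PHASE_CHANGE_THRESHOLD

def pick_configuration(ipc: float) -> str:
    # Toy policy: wide backend for high-IPC phases,
    # narrow backend for low-IPC (memory-bound) phases.
    return "big-backend" if ipc > 1.0 else "little-backend"

ipc_trace = [1.6, 1.5, 0.6, 0.5, 1.4]  # per-interval IPC samples (made up)
config = pick_configuration(ipc_trace[0])
for prev, curr in zip(ipc_trace, ipc_trace[1:]):
    if detect_phase_change(prev, curr):
        config = pick_configuration(curr)   # re-tune only on a phase change
    print(f"IPC {curr:.1f} -> configuration {config}")
```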
“…Unlike prior machine-learning-based approaches [23], [24], [25], [26], [32], where the learned model remains static after deployment, we use RL to continually refine and update the decision policy throughout runtime execution. As the RL system learns and adjusts its decisions over time, it gains a better understanding of what works for the running program and becomes more efficient in recommending hardware power configurations for the target programs and underlying hardware.…”
Section: Introduction (mentioning)
confidence: 99%
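The statement above describes continual policy refinement via RL but not the state, action, or reward definitions. A minimal, assumption-laden sketch, using epsilon-greedy tabular Q-learning over three hypothetical power configurations and a placeholder reward (e.g., negative energy-delay product), illustrates how such a policy keeps updating at runtime:

```python
# Minimal epsilon-greedy Q-learning sketch for runtime power configuration.
# States, actions, and the reward signal are hypothetical placeholders.
import random
from collections import defaultdict

ACTIONS = ["low-power", "balanced", "high-performance"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state: str) -> str:
    if random.random() < EPSILON:                         # explore occasionally
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)    # exploit best known

def update(state: str, action: str, reward: float, next_state: str) -> None:
    best_next = max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next
                                       - q_table[state][action])

def observe_environment() -> tuple[str, float]:
    """Placeholder for reading a phase classification and a reward such as
    negative energy-delay product from hardware counters."""
    return random.choice(["compute", "memory"]), random.uniform(-1.0, 0.0)

state, _ = observe_environment()
for _ in range(1000):                      # continual refinement at runtime
    action = choose_action(state)
    next_state, reward = observe_environment()
    update(state, action, reward, next_state)
    state = next_state
```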