2015
DOI: 10.1016/j.micpro.2015.05.009
Optimal processor dynamic-energy reduction for parallel workloads on heterogeneous multi-core architectures

Cited by 15 publications (9 citation statements)
References 23 publications (40 reference statements)
“…Moreover, we do not consider or require different frequencies for each core in a cluster, i.e., all the cores in a cluster run at the same frequency. Inspired by other works [33][34][35], we devised the following model that can be used to estimate the performance of a given parallel application running on a two-cluster HMP.…”
Section: Application Performance Modelling
confidence: 99%
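The excerpt above does not reproduce the cited performance model itself. As a point of reference, the sketch below shows, in Python, the general shape of such a two-cluster estimate: each cluster runs at a single frequency and contributes throughput in proportion to its core count and per-core performance. The function names, the per-core IPC figures and the linear-scaling assumption are illustrative assumptions, not taken from the cited paper.

```python
# Hypothetical sketch of a two-cluster HMP performance estimate.
# Per-core performance numbers and the linear-scaling assumption are
# illustrative only; they are not taken from the cited paper.

def cluster_throughput(n_cores, freq_ghz, ipc):
    """Aggregate instructions/second for one cluster, assuming all cores
    in the cluster run at the same frequency (as stated in the excerpt)
    and that the parallel workload scales linearly with core count."""
    return n_cores * freq_ghz * 1e9 * ipc

def estimated_runtime(total_instructions, big, little):
    """Estimate runtime of a fully parallel workload spread across a big
    and a LITTLE cluster in proportion to their throughputs."""
    total_tp = cluster_throughput(**big) + cluster_throughput(**little)
    return total_instructions / total_tp

if __name__ == "__main__":
    big = {"n_cores": 4, "freq_ghz": 2.0, "ipc": 2.0}      # assumed values
    little = {"n_cores": 4, "freq_ghz": 1.4, "ipc": 1.0}   # assumed values
    print(f"estimated runtime: {estimated_runtime(2e11, big, little):.2f} s")
```

In practice such models add terms for serial fractions, memory stalls and migration overheads; the linear form above is only the simplest starting point.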
“…A variable-aware DVFS method was proposed in [9], in which the processor's state is adjusted according to variables such as voltage, temperature and process parameters, rather than by a frequency threshold and a greedy policy. In [10], models were built for both homogeneous and heterogeneous processors; the authors sought to reduce the processor's dynamic power and compared the dynamic energy consumed when processing tasks on the two kinds of processor. To find the optimal frequency for each core when processing tasks, a decision tree was adopted in [11] to minimize the energy consumption per user instruction.…”
Section: Related Work
confidence: 99%
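For context on the dynamic-power reduction described above, the sketch below applies the textbook model P_dyn ≈ C·V²·f to a compute-bound task whose runtime scales as work/f, and selects the operating point with the lowest dynamic energy. The capacitance, cycle count and voltage/frequency table are invented for illustration and do not come from [9], [10] or [11].

```python
# Illustrative only: pick the V/f operating point that minimizes dynamic
# energy for a compute-bound task, using P_dyn = C * V^2 * f and a
# runtime that scales as work / f. The OPP table below is invented.

CAP_EFF = 1e-9          # assumed effective switched capacitance (F)
WORK_CYCLES = 5e10      # assumed cycles required by the task

# (frequency in Hz, supply voltage in V) -- hypothetical DVFS table
OPP_TABLE = [(0.8e9, 0.90), (1.2e9, 1.00), (1.6e9, 1.10), (2.0e9, 1.25)]

def dynamic_energy(freq, volt):
    power = CAP_EFF * volt ** 2 * freq      # P_dyn = C * V^2 * f
    runtime = WORK_CYCLES / freq            # compute-bound: t = work / f
    return power * runtime                  # E = P * t = C * V^2 * work

best = min(OPP_TABLE, key=lambda opp: dynamic_energy(*opp))
print(f"lowest dynamic energy at f={best[0] / 1e9:.1f} GHz, V={best[1]:.2f} V")
```

With this model the per-task dynamic energy reduces to C·V²·work, so the lowest-voltage point always wins; real schedulers must also weigh static energy and deadlines, which is why frequency selection (e.g. the decision tree in [11]) is non-trivial.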
“…The effect of the storage layout on an algorithm's time performance manifests mainly as latency spent waiting for memory accesses to complete. To balance cache latency against hit rate, modern CPUs usually employ a multi-level cache structure to reduce the average memory access time [15,19]. Before accessing main memory, the CPU queries the cache levels in sequence until one hits; main memory is accessed only when all levels miss.…”
Section: Storage Optimization
confidence: 99%
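The sequential lookup described in the excerpt is commonly summarised by the average memory access time (AMAT) recurrence, AMAT = hit_time + miss_rate × (access time of the next level). A minimal sketch with invented latencies and miss rates:

```python
# Average memory access time (AMAT) for a multi-level cache hierarchy:
# each level is queried in turn, and a miss falls through to the next
# level. Latencies (cycles) and miss rates below are invented examples.

LEVELS = [
    ("L1", 4, 0.05),     # (name, hit latency in cycles, miss rate)
    ("L2", 12, 0.30),
    ("L3", 40, 0.50),
]
MEMORY_LATENCY = 200     # cycles to reach DRAM on a last-level miss

def amat(levels, memory_latency):
    """AMAT = hit_time + miss_rate * AMAT(next level), computed from the
    last cache level back toward L1."""
    penalty = memory_latency
    for _, hit_latency, miss_rate in reversed(levels):
        penalty = hit_latency + miss_rate * penalty
    return penalty

print(f"average memory access time: {amat(LEVELS, MEMORY_LATENCY):.1f} cycles")
```

Because each level's miss rate multiplies the full cost of the levels below it, even a modest L1 hit rate keeps the average close to the L1 latency, which is the point the excerpt is making about multi-level caches.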