2015
DOI: 10.1007/978-3-319-17473-0_6

Understanding Co-run Degradations on Integrated Heterogeneous Processors

Cited by 4 publications (3 citation statements)
References 25 publications
“…Interference within the system consists of memory interference on the shared memory system, and OS scheduling interference on the CPU. The scheduling interference occurs when the Linux scheduler selects an interfering task from the ready queue in favor of the GPU offloading task, which may inject large delays (jitter) into the response time of the offloaded kernel [28]. To protect the offloading point from such jitter, extra synchronizations are inserted around the offload in the OpenMP runtime.…”
Section: Enforcement of CPU Memory Inactivity
confidence: 99%
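The excerpt above says the extra synchronizations are inserted inside the OpenMP runtime itself; as a rough illustration only, the sketch below mimics that guard at the application level by launching the offload as a deferred target task and fencing on it immediately, so host-side scheduling jitter is absorbed at the fence rather than in the launch path. The function name, kernel body, and pragma choices are assumptions, not taken from the cited work.

```c
#include <stddef.h>

/* Illustrative only: the cited mechanism lives inside the OpenMP runtime;
 * this user-level version approximates the idea. Names and the kernel
 * body are assumptions, not from the paper. */
void guarded_offload(float *a, const float *b, size_t n)
{
    /* Launch the kernel on the GPU as a deferred target task (OpenMP 4.5+). */
    #pragma omp target teams distribute parallel for \
        map(tofrom: a[0:n]) map(to: b[0:n]) nowait
    for (size_t i = 0; i < n; ++i)
        a[i] += b[i];

    /* Explicit synchronization right after the offload point: the host
     * thread blocks here until the offloaded kernel completes, so any delay
     * the OS scheduler injects is confined to this fence. */
    #pragma omp taskwait
}
```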
“…This section reports two studies that are similar to this work. The first was done by Zhu et al. [36, 37], in which the authors studied co-scheduling on an integrated CPU-GPU system under a power cap. They devised a greedy algorithm that accounts for memory contention and the resulting execution-time degradation when selecting a frequency for power capping.…”
Section: Related Work 8.1 Memory Contention Studies
confidence: 99%
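As a loose illustration of what such a greedy frequency choice might look like (a sketch under assumptions, not the algorithm from Zhu et al.; the candidate table, its fields, and the function name are hypothetical), one could walk the frequency settings from fastest to slowest and take the first whose predicted co-run power fits under the cap:

```c
#include <stddef.h>

/* Hypothetical sketch, not the published algorithm: a table of candidate
 * frequency settings with power predicted by some contention-aware model. */
typedef struct {
    double freq_mhz;   /* candidate frequency, sorted highest first    */
    double power_w;    /* predicted chip power for the co-run at freq  */
} FreqPoint;

/* Greedily return the index of the fastest candidate whose predicted
 * power stays under the cap; fall back to the slowest setting if none fits. */
size_t pick_frequency(const FreqPoint *cand, size_t n, double power_cap_w)
{
    for (size_t i = 0; i < n; ++i)
        if (cand[i].power_w <= power_cap_w)
            return i;
    return n ? n - 1 : 0;   /* degenerate cases: no fit, or empty table */
}
```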
“…Mekkat et al (2013) analyzed the management policy for the shared last level cache. Zhang et al (2015Zhang et al ( , 2017b studied the co-running behaviors of different devices for the same application, while Zhu et al (2014Zhu et al ( , 2017b) studied co-running performance degradation for different devices for separate applications. Garzón et al (2017) proposed an approach to optimize the energy efficiency of iterative computation on heterogeneous processors.…”
Section: Performance Analysis for Coupled Heterogeneous Processors
confidence: 99%