2018 IEEE International Conference on Cluster Computing (CLUSTER) 2018
DOI: 10.1109/cluster.2018.00052
Co-Scheduling HPC Workloads on Cache-Partitioned CMP Platforms

Abstract: With the recent advent of many-core architectures such as chip multiprocessors (CMP), the number of processing units accessing a global shared memory is constantly increasing. Co-scheduling techniques are used to improve application throughput on such architectures, but sharing resources often generates critical interferences. In this paper, we focus on the interferences in the last level of cache (LLC) and use the Cache Allocation Technology (CAT) recently provided by Intel to partition the LLC and give each …
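The abstract refers to Intel's Cache Allocation Technology (CAT) for partitioning the LLC. As an illustration (not taken from the paper), CAT can be driven on Linux through the resctrl filesystem; the sketch below assumes root privileges, a kernel with resctrl support, an 8-way L3 cache, and illustrative group names and PIDs:

```shell
# Sketch: LLC partitioning with Intel CAT via the Linux resctrl interface.
# Assumes root, resctrl kernel support, and an 8-way L3 cache; the group
# names app_a/app_b and the PID variables are purely illustrative.
mount -t resctrl resctrl /sys/fs/resctrl
mkdir /sys/fs/resctrl/app_a /sys/fs/resctrl/app_b
# Give app_a the lower 4 ways of the L3 and app_b the upper 4 ways
# (capacity bit masks must be contiguous).
echo "L3:0=0f" > /sys/fs/resctrl/app_a/schemata
echo "L3:0=f0" > /sys/fs/resctrl/app_b/schemata
# Bind each co-scheduled process to its cache partition by PID.
echo "$PID_A" > /sys/fs/resctrl/app_a/tasks
echo "$PID_B" > /sys/fs/resctrl/app_b/tasks
```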

Cited by 10 publications (6 citation statements)
References 25 publications
“…J. Breitbart et al created a resource monitoring tool useful for co-scheduling HPC applications [16] and provided a memory-intensity-aware co-scheduling policy [17]. Since the industry started supporting several QoS control features, some researchers combined the above concepts with cache partitioning [18], [19], bandwidth partitioning [20], [21] or the combination of them [22], [23]. Q. Zhu et al rather targeted CPU-GPU heterogeneous processors and proposed a co-scheduling approach suitable for them [24].…”
Section: Related Work
confidence: 99%
“…The latter seeks to partition a graph into even-sized components while minimizing the number of edges. Figure 1 summarizes other problems that can be modeled as Balanced Partition, including: People Assignment [34], Routing [31], [35]- [38], Task Allocation [39], [40], File Placement [41], [42] and Scheduling [43]- [45]. We briefly describe them in the following lines.…”
Section: A Balanced Partition (BP)
confidence: 99%
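The Balanced Partition problem quoted above (even-sized components, minimal cut edges) can be illustrated with a small greedy heuristic. This sketch is mine, not from any of the cited papers; the exact problem is NP-hard, so the heuristic only approximates the minimum cut:

```python
def balanced_bipartition(nodes, edges):
    """Greedy heuristic for Balanced Partition: split the nodes into two
    near-equal halves while trying to keep the number of cut edges low.
    Illustrative only -- the exact problem is NP-hard."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cap = (len(nodes) + 1) // 2          # maximum size of either part
    parts = (set(), set())
    # Place high-degree nodes first, each into the part where it already
    # has more neighbours, provided that part still has room.
    for n in sorted(nodes, key=lambda n: -len(adj[n])):
        prefer = sorted((0, 1), key=lambda p: -len(adj[n] & parts[p]))
        for p in prefer:
            if len(parts[p]) < cap:
                parts[p].add(n)
                break
    return parts

def cut_size(parts, edges):
    """Number of edges crossing between the two parts."""
    return sum((u in parts[0]) != (v in parts[0]) for u, v in edges)
```

Both halves end up within one node of each other in size, which is the balance constraint the cited formulations share.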
“…Here, the goal is to improve the parallel execution efficiency of tasks that share resources (e.g., external devices, shared memory, and files). To solve this problem, there exist multiple paradigms, such as a Partitioned Schedule [43], [45], a Global Schedule [46], and a Semi-partitioned Schedule [44].…”
Section: A Balanced Partition (BP)
confidence: 99%
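The partitioned-schedule paradigm mentioned in the quote can be sketched as a static task-to-core assignment; this is a minimal illustration of the idea, not code from the cited works, and it uses first-fit decreasing on task utilizations as the (assumed) packing rule:

```python
def partitioned_assign(utils, n_cores):
    """Partitioned-schedule paradigm: statically assign each task to one
    core (first-fit decreasing on utilization); afterwards each core runs
    only its own tasks. Contrast with a global schedule, where any task
    may run on, and migrate between, all cores at run time."""
    load = [0.0] * n_cores
    assignment = {}
    for tid, u in sorted(enumerate(utils), key=lambda t: -t[1]):
        for core in range(n_cores):
            if load[core] + u <= 1.0:        # core capacity = 1.0
                load[core] += u
                assignment[tid] = core
                break
        else:
            raise ValueError(f"task {tid} does not fit on any core")
    return assignment
```

A semi-partitioned schedule would additionally allow a few tasks that fit on no single core to be split across cores.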
“…Some authors in their publications approach it as an offline scheduling problem, where all task data are available at the start and an optimal schedule can be constructed in advance. Among these publications are [14,20,21], where the authors solve the offline scheduling problem with resource constraint. They model the CPU cache partition size as a controllable task resource and define task speedup as a function of the cache partition size.…”
Section: Related Work
confidence: 99%
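The quote describes modeling the cache-partition size as a controllable resource with per-task speedup functions. A simple way to make that concrete is a greedy marginal-gain allocator; this sketch is my own illustration (the speedup curves are invented), and the greedy rule is only exact under the assumption that each speedup function is concave in the number of ways:

```python
def allocate_ways(speedups, total_ways, min_ways=1):
    """Allocate LLC ways among applications, treating partition size as a
    controllable resource. Each remaining way goes to the application
    with the largest marginal speedup gain; this greedy rule is optimal
    only when every speedup curve is concave in the number of ways."""
    n = len(speedups)
    alloc = [min_ways] * n               # every app gets a minimum share
    for _ in range(total_ways - min_ways * n):
        gains = [speedups[a](alloc[a] + 1) - speedups[a](alloc[a])
                 for a in range(n)]
        alloc[gains.index(max(gains))] += 1
    return alloc

# Invented example curves: one cache-sensitive app with diminishing
# returns, one streaming app that barely benefits from extra ways.
cache_hungry = lambda w: 1.0 - 2.0 ** -w
streaming = lambda w: 0.1 * w
```

On an 8-way LLC this gives the cache-sensitive application the first few ways (where its marginal gain is largest) and the remainder to the streaming application.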