Proceedings of the 46th Annual IEEE/ACM International Symposium on Microarchitecture 2013
DOI: 10.1145/2540708.2540734
Imbalanced cache partitioning for balanced data-parallel programs

Abstract: This paper investigates partitioning the ways of a shared last-level cache among the threads of a symmetric data-parallel application running on a chip multiprocessor. Unlike prior work on way-partitioning for unrelated threads in a multiprogramming workload, the domain of multithreaded programs requires both throughput and fairness. Additionally, our workloads show no obvious thread differences to exploit: program threads see nearly identical IPC and data reuse as they progress (as expected for a well-written …
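The abstract refers to way-partitioning, in which each thread (or core) may only install lines into a subset of the ways of each set in the shared last-level cache. As a rough illustration of that general mechanism only, and not of the paper's allocation policy, the sketch below models a single way-partitioned set; all class and variable names are hypothetical.

# Minimal, illustrative sketch of LLC way-partitioning (not the paper's policy):
# on a miss, the victim is chosen only from the ways assigned to the requesting
# thread, so each thread's working set is isolated by its way allocation.

class WayPartitionedSet:
    def __init__(self, num_ways, way_owner):
        # way_owner[w] = thread id that owns way w (the partition)
        self.tags = [None] * num_ways   # cached tag per way
        self.lru = [0] * num_ways       # larger value = more recently used
        self.way_owner = way_owner
        self.clock = 0

    def access(self, thread_id, tag):
        self.clock += 1
        # Hit check: any way may hold the line (hits are not restricted).
        for w, t in enumerate(self.tags):
            if t == tag:
                self.lru[w] = self.clock
                return "hit"
        # Miss: evict the LRU line among the ways owned by this thread.
        owned = [w for w in range(len(self.tags)) if self.way_owner[w] == thread_id]
        victim = min(owned, key=lambda w: self.lru[w])
        self.tags[victim] = tag
        self.lru[victim] = self.clock
        return "miss"

# Example: an 8-way set split 6/2 between two threads of the same program,
# an intentionally imbalanced split in the spirit of the title.
s = WayPartitionedSet(8, way_owner=[0] * 6 + [1] * 2)
print(s.access(0, 0x1A), s.access(1, 0x2B), s.access(0, 0x1A))  # miss miss hit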

Cited by 14 publications (12 citation statements); references 50 publications.
“…For instance, partitioning has been used to improve fairness [37,40], implement priorities and guarantee QoS [12,20,26], improve NUCA designs [3,33], and eliminate side-channel attacks [39]. Partitioning is therefore a general tool to help achieve system-level objectives.…”
Section: Applications of Cache Models
confidence: 99%
“…Recall from Sec. 2.4 that partitioning has many benefits: improving shared cache performance [41], quality of service [26], fairness [40], security [39], etc.…”
Section: Case Study: Cache Partitioning
confidence: 99%
“…More recently, several efforts have investigated how cache partitioning can be applied to improve the overall throughput of multithreaded programs [17][18][19]. However, these techniques cannot directly address the impact of variations, as they fail to account for the thread-performance imbalance caused by within-die variations.…”
Section: Related Work
confidence: 99%
“…Proposals for replacement policy modification have concentrated on tuning the replacement policy for better prediction of future reuse of cached data, with separate parameter tuning for each application [13,19,20,21,31]. Researchers have also proposed dynamic partitioning policies for multithreaded programs with static thread assignments and mapping, with an aim to ensure balanced progress for all threads while optimizing throughput [26,27].…”
Section: Introduction
confidence: 99%