Proceedings of the 15th International Conference on Parallel Architectures and Compilation Techniques 2006
DOI: 10.1145/1152154.1152161

Communist, utilitarian, and capitalist cache policies on CMPs

Abstract: As chip multiprocessors (CMPs) become increasingly mainstream, architects have likewise become more interested in how best to share a cache hierarchy among multiple simultaneous threads of execution. The complexity of this problem is exacerbated as the number of simultaneous threads grows from two or four to the tens or hundreds. However, there is no consensus in the architectural community on what "best" means in this context. Some papers in the literature seek to equalize each thread's performance loss due t…

Cited by 154 publications (110 citation statements)
References 18 publications

“…We are aware that there are several possible definitions of fair use of shared resources [16]. The particular choice of fairness measure should not affect the main purpose of our work.…”
Section: B. Resource Management Objectives (mentioning)
confidence: 99%
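One way to make "several possible definitions of fair use" concrete: a minimal Python sketch of two candidate unfairness metrics, both built from per-thread slowdowns relative to running alone. The function names and the toy IPC numbers are illustrative assumptions, not the formulation used in [16].

def slowdowns(ipc_alone, ipc_shared):
    # Per-thread slowdown X_i = IPC_alone_i / IPC_shared_i; values above 1
    # mean the thread runs slower when it shares the cache.
    return [a / s for a, s in zip(ipc_alone, ipc_shared)]

def unfairness_range(ipc_alone, ipc_shared):
    # Candidate metric 1: spread between the most and least penalized
    # threads; 0 means every thread loses the same fraction of performance.
    x = slowdowns(ipc_alone, ipc_shared)
    return max(x) - min(x)

def unfairness_ratio(ipc_alone, ipc_shared):
    # Candidate metric 2: worst slowdown divided by best slowdown.
    x = slowdowns(ipc_alone, ipc_shared)
    return max(x) / min(x)

# Example: thread 0 barely suffers from sharing, thread 1 suffers badly.
print(unfairness_range([2.0, 1.0], [1.9, 0.5]))  # ~0.95
print(unfairness_ratio([2.0, 1.0], [1.9, 0.5]))  # ~1.90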
“…Rafique et al. [125] and Petoumenos et al. [121] proposed spatially fine-grained partitioning support, which can be used by various partitioning policies (such as miss-rate reduction, fair caching, and QoS provision). Hsu et al. [66] studied various partitioning metrics and policies. Their study focused on three caching paradigms (communist caching for fairness, utilitarian caching for overall throughput, and uncontrolled capitalist caching), and recognized the difficulty of improving both overall throughput and fairness with a single partitioning scheme.…”
Section: CMP Cache Partitioning (mentioning)
confidence: 99%
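As a rough illustration of how the three paradigms named above can be cast as partitioning objectives, here is a hedged Python sketch that picks a way allocation from per-thread miss curves; the exhaustive search, the function names, and the toy curves are assumptions for illustration, not the mechanism evaluated by Hsu et al.

from itertools import product

def partitions(num_threads, total_ways):
    # Enumerate every allocation that gives each thread at least one way.
    for alloc in product(range(1, total_ways + 1), repeat=num_threads):
        if sum(alloc) == total_ways:
            yield alloc

def utilitarian(miss_curves, total_ways):
    # "Utilitarian": minimize total misses, a proxy for overall throughput.
    return min(partitions(len(miss_curves), total_ways),
               key=lambda a: sum(c[w] for c, w in zip(miss_curves, a)))

def communist(miss_curves, total_ways):
    # "Communist": equalize harm -- minimize the worst per-thread miss count
    # (equal-slowdown variants fit the same skeleton with a different curve).
    return min(partitions(len(miss_curves), total_ways),
               key=lambda a: max(c[w] for c, w in zip(miss_curves, a)))

# "Capitalist" is simply no partitioning: threads compete freely under the
# replacement policy, so there is no allocation to compute.

# Toy miss curves indexed by number of ways (index 0 unused).
curves = [[None, 100, 70, 50, 35, 25, 20, 18, 17],  # cache-friendly thread
          [None, 80, 79, 78, 77, 76, 75, 74, 73]]   # streaming-like thread
print(utilitarian(curves, 8))  # (7, 1): ways go where they cut misses most
print(communist(curves, 8))    # (2, 6): pull up the worst-off thread

The point of the sketch is only that the same miss-curve input supports different objective functions, which is why a single partitioning scheme struggles to serve both throughput and fairness at once.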
“…Measurement information can be gathered via profiling [66,84], LRU stack hit position counting [156], monitoring [143], or dynamic set sampling [123]. Table 5.1 compares the optimization goals and policies used by prior schemes.…”
Section: Cache Partitioning Background (mentioning)
confidence: 99%
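To make "LRU stack hit position counting" concrete, the sketch below keeps a per-thread histogram of LRU hit depths for a fully associative model of one set and reads predicted misses for any candidate way count off that histogram; the class name and the single-set simplification are assumptions for illustration (real proposals sample sets and use per-way counters).

class StackDistanceProfiler:
    # Counts, for each access of one thread, the LRU stack depth that hit.
    # Misses under a w-way allocation are then the recorded misses plus all
    # hits at depth >= w.

    def __init__(self, assoc):
        self.assoc = assoc
        self.stack = []                  # most-recently-used line at index 0
        self.hits_at = [0] * assoc       # hits_at[d] = hits at LRU depth d
        self.misses = 0                  # cold or beyond-associativity misses

    def access(self, line_addr):
        if line_addr in self.stack:
            d = self.stack.index(line_addr)
            self.hits_at[d] += 1
            self.stack.pop(d)
        else:
            self.misses += 1
            if len(self.stack) == self.assoc:
                self.stack.pop()         # evict the LRU line
        self.stack.insert(0, line_addr)  # the accessed line becomes MRU

    def predicted_misses(self, ways):
        # Misses this thread would take if confined to `ways` ways.
        return self.misses + sum(self.hits_at[ways:])

# Example: a thread cycling over 3 lines, profiled against an 8-way set.
p = StackDistanceProfiler(assoc=8)
for addr in [0, 1, 2] * 100:
    p.access(addr)
print(p.predicted_misses(2))  # 300: two ways thrash on a 3-line loop
print(p.predicted_misses(3))  # 3: three ways leave only the cold misses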
“…System software also does not have control over hardware resources such as caches and memory bandwidth, which makes responding to contention and managing applications' QoS quite challenging. As a result, despite the research attention given to contention problems on multicore platforms [28,45,10,19,44,51,52,27,32,15,29,24,63,23,24,7,4,60], mitigating the impact of contention on an application's performance and QoS and enforcing the relative QoS priorities of co-running applications while maximizing machine utilization remain key challenges in modern warehouse-scale computers.…”
Section: Mitigating Contention (mentioning)
confidence: 99%
“…Hardware techniques such as cache partitioning and bandwidth partitioning to reduce resource contention and improve performance and fairness on multicores have received much research attention [28,45,10,19,44,51,52,27,32]. In addition, there has also been a body of work aimed at better modeling cache contention [8] and monitoring cache contention [62].…”
Section: Novel Hardware Solutions To Mitigate Contention (mentioning)
confidence: 99%