Cooperative Networking 2011
DOI: 10.1002/9781119973584.ch13

Cooperative Caching for Chip Multiprocessors

Abstract: Chip multiprocessor (CMP) systems have made the on-chip caches a critical resource shared among co-scheduled threads. Limited off-chip bandwidth, increasing on-chip wire delay, destructive inter-thread interference, and diverse workload characteristics pose key design challenges. To address these challenges, we propose CMP cooperative caching (CC), a unified framework to efficiently organize and manage on-chip cache resources. By forming a globally managed, shared cache out of cooperative private caches, CC can …
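The abstract's central idea, forming a logically shared cache out of cooperating private caches, can be illustrated with a toy lookup path: a local miss first probes the sibling private caches before paying the off-chip penalty. The following is a minimal sketch under assumed names (`PrivateCache`, `cc_lookup`), not the chapter's actual mechanism.

```python
# Illustrative sketch of a cooperative-caching lookup path: on a local
# private-cache miss, sibling private caches are probed before falling back
# to off-chip memory. Structure and policies are assumptions, not the
# chapter's design.

class PrivateCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}                     # tag -> data; insertion order ~ LRU

    def lookup(self, tag):
        if tag in self.lines:
            data = self.lines.pop(tag)
            self.lines[tag] = data          # move to MRU position
            return data
        return None

    def fill(self, tag, data):
        if len(self.lines) >= self.capacity:
            self.lines.pop(next(iter(self.lines)))   # evict LRU line
        self.lines[tag] = data


def cc_lookup(core_id, tag, caches, memory):
    """Cooperative lookup: local cache, then sibling caches, then memory."""
    local = caches[core_id]
    data = local.lookup(tag)
    if data is not None:
        return data, "local hit"
    for cid, sibling in enumerate(caches):
        if cid == core_id:
            continue
        data = sibling.lookup(tag)
        if data is not None:
            local.fill(tag, data)           # replicate into requester's cache
            return data, f"remote hit in cache {cid}"
    data = memory[tag]                      # off-chip access on a global miss
    local.fill(tag, data)
    return data, "off-chip miss"
```

The remote-hit path is what turns the private caches into a logically shared pool: capacity unused by one core can service another core's misses at on-chip rather than off-chip latency.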

Cited by 31 publications (33 citation statements)
References 151 publications
“…Our cache architecture builds on prior work to determine the optimal partitions for the LLC. As in state-of-the-art schemes, we target a cache that is shared among multiprogrammed workloads [2,5,14,20,32]. Accesses are tracked by utility monitors [20] for computing each application's use of the cache.…”
Section: Usage Monitoring and Partitioning (mentioning)
confidence: 99%
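For context on the utility monitors cited here [20], a hedged sketch of the idea: per-application shadow tags record the LRU stack depth of each hit, which gives the marginal hits an extra way would capture; a partitioner then hands out ways by marginal gain. The greedy allocator below is a simplification of UCP's lookahead algorithm, and all names are illustrative.

```python
# Sketch of utility monitoring for cache partitioning. A shadow-tag LRU stack
# (one set shown) counts hits per stack depth: way_hits[d] is the number of
# hits that needed at least d+1 ways, i.e. the marginal benefit of way d+1.

class UtilityMonitor:
    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.stack = []                     # shadow-tag LRU stack, MRU first
        self.way_hits = [0] * num_ways      # hits observed at each stack depth

    def access(self, tag):
        if tag in self.stack:
            depth = self.stack.index(tag)   # 0 = MRU position
            self.way_hits[depth] += 1
            self.stack.remove(tag)
        elif len(self.stack) == self.num_ways:
            self.stack.pop()                # evict shadow LRU entry
        self.stack.insert(0, tag)           # accessed tag becomes MRU

    def marginal_utility(self):
        """Extra hits gained by granting each successive way."""
        return list(self.way_hits)


def greedy_partition(monitors, total_ways):
    """Give each way to the application with the highest marginal gain."""
    alloc = [0] * len(monitors)
    gains = [m.marginal_utility() for m in monitors]
    for _ in range(total_ways):
        best = max(range(len(monitors)),
                   key=lambda a: gains[a][alloc[a]]
                   if alloc[a] < len(gains[a]) else -1)
        alloc[best] += 1
    return alloc
```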
“…Xie et al [32] and Jaleel et al [13] modified the shared-cache replacement policy to provide performance benefits over an unmanaged cache. Two-dimensional cache partitioning was proposed by Chang et al [5]. It allows both space and time sharing within the cache: a few processors share a small cache region for a particular time interval while the rest share the remaining, larger region.…”
Section: Related Work (mentioning)
confidence: 99%
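A common mechanism underlying partitioning schemes like those cited is way-partitioned replacement: an application may insert only into the ways it owns, so victim selection is restricted to that subset. The sketch below assumes that mechanism; names such as `Way` and `choose_victim` are illustrative, and the time dimension of a two-dimensional scheme would correspond to periodically reassigning the `owner` field.

```python
# Minimal sketch of way-partitioned replacement in one cache set. Each way is
# owned by one application; the victim is the LRU line among the requester's
# own ways. Assumes every application owns at least one way in the set.

from dataclasses import dataclass

@dataclass
class Way:
    tag: int = -1
    owner: int = -1        # application that owns this way
    last_used: int = 0     # timestamp for LRU within the partition

def choose_victim(set_ways, app_id):
    """Pick the LRU line among the ways allocated to app_id."""
    owned = [w for w in set_ways if w.owner == app_id]
    return min(owned, key=lambda w: w.last_used)

def insert(set_ways, app_id, tag, now):
    victim = choose_victim(set_ways, app_id)
    victim.tag, victim.last_used = tag, now
```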
“…Shared caches are preferable to their private alternatives, especially when we consider (i) efficient utilization of cache space and (ii) avoiding data redundancy across caches. In particular, depending on their data access/sharing patterns, cache sharing between two processes/threads can be constructive or destructive [6], [8], [9]. Shared caches can cause co-runner applications running on different cores to contend for the available space.…”
Section: Multicore Architectures and Data Reuse (mentioning)
confidence: 99%
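One rough way to make the constructive/destructive distinction concrete is to replay two address traces through a shared LRU cache and compare the combined miss count against each trace's misses when it has the cache to itself. This is purely an illustrative experiment under assumed names; real interference also depends on timing and set mapping.

```python
# Toy classifier for cache-sharing behavior: sharing is "constructive" when
# co-running two traces produces fewer misses than running each alone
# (e.g. because the threads reuse each other's data), else "destructive".

from collections import OrderedDict

def misses(trace, capacity):
    """Count misses of an address trace in a fully associative LRU cache."""
    cache, count = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            cache.move_to_end(addr)         # refresh LRU position
        else:
            count += 1
            if len(cache) == capacity:
                cache.popitem(last=False)   # evict LRU entry
            cache[addr] = True
    return count

def sharing_effect(trace_a, trace_b, capacity):
    solo = misses(trace_a, capacity) + misses(trace_b, capacity)
    n = min(len(trace_a), len(trace_b))
    interleaved = [x for pair in zip(trace_a, trace_b) for x in pair]
    interleaved += list(trace_a[n:]) + list(trace_b[n:])   # keep leftovers
    shared = misses(interleaved, capacity)
    return "constructive" if shared < solo else "destructive"
```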
“…The interconnection mechanisms presented in [16] are discussed along with the advantages and disadvantages of each. Cores in Hydra [17] are connected to the level 2 (L2) cache through a crossbar. Modularity gives finer control over electrical parameters and can therefore yield higher performance or reduced power consumption.…”
Section: Related Work (mentioning)
confidence: 99%
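To illustrate the crossbar mentioned for Hydra [17]: its defining property is that each L2 bank can grant a request from any core in the same cycle, independently of the other banks, rather than all cores sharing one bus. The round-robin policy and all names below are assumptions for illustration, not Hydra's actual arbitration.

```python
# Toy one-cycle crossbar arbiter: every bank independently grants at most one
# requesting core per cycle, with per-bank round-robin priority.

def crossbar_arbitrate(requests, num_banks, rr_state):
    """requests: list of (core_id, bank_id) pairs; returns granted pairs.

    rr_state maps bank_id -> last granted core, giving round-robin fairness.
    """
    grants = []
    for bank in range(num_banks):
        contenders = sorted(c for c, b in requests if b == bank)
        if not contenders:
            continue
        start = rr_state.get(bank, -1)
        # First contender after the previous winner, wrapping around.
        winner = next((c for c in contenders if c > start), contenders[0])
        grants.append((winner, bank))
        rr_state[bank] = winner
    return grants

# Example: cores 0 and 2 contend for bank 1; core 1 requests bank 0 alone.
rr = {}
print(crossbar_arbitrate([(0, 1), (2, 1), (1, 0)], num_banks=2, rr_state=rr))
# -> [(1, 0), (0, 1)]; a second identical cycle would grant (2, 1) instead.
```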