2010 43rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO)
DOI: 10.1109/micro.2010.52

Achieving Non-Inclusive Cache Performance with Inclusive Caches: Temporal Locality Aware (TLA) Cache Management Policies

Cited by 101 publications (72 citation statements); references 14 publications.
“…Thus, when the shared cache evicts a block with non-empty tracking bits, it is required to send a recall message to each private cache that is caching the block, adding to system traffic. More insidiously, such recalls can increase the cache miss rate by forcing cores to evict hot blocks they are actively using [11]. To ensure scalability, we seek a system that makes recalls vanishingly rare, the design of which first requires understanding the reasons why recalls occur.…”
Section: Concern #3: Maintaining Inclusion
confidence: 99%
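The recall mechanism this excerpt describes can be sketched in a few lines. The following is an illustrative model, not code from the cited work: a shared cache keeps per-core tracking bits for each block, and evicting a block whose bits are non-empty forces a recall (an invalidation) into every private cache holding a copy. All class and variable names are hypothetical.

```python
class SharedCache:
    """Toy shared cache with per-block tracking bits (illustrative only)."""

    def __init__(self, capacity, num_cores):
        self.capacity = capacity
        self.blocks = {}                            # addr -> set of sharer core ids
        self.private = [set() for _ in range(num_cores)]  # per-core private caches
        self.recalls = 0                            # recall messages sent so far

    def access(self, core, addr):
        # Make room if the shared cache is full (naive oldest-first victim).
        if addr not in self.blocks and len(self.blocks) == self.capacity:
            self.evict(next(iter(self.blocks)))
        self.blocks.setdefault(addr, set()).add(core)   # set tracking bit
        self.private[core].add(addr)

    def evict(self, addr):
        # Non-empty tracking bits: send one recall per sharing core,
        # forcing each private cache to drop the block.
        for core in self.blocks.pop(addr):
            self.private[core].discard(addr)
            self.recalls += 1

# Usage: with a tiny shared cache, core 1's miss on B evicts A and
# recalls core 0's private copy of A.
cache = SharedCache(capacity=1, num_cores=2)
cache.access(0, 'A')
cache.access(1, 'B')
```

This makes the excerpt's concern concrete: the recall count grows with how often the shared cache evicts blocks that private caches are still actively using.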
“…Many other papers have also looked at exclusive [Barroso et al 2000] and non-inclusive [Jaleel et al 2010] cache hierarchies; however, they focus on server-class and general-purpose processors with considerably different workloads than this particular study.…”
Section: Memory Coherence and Consistency
confidence: 99%
“…In an inclusive cache hierarchy, as shown in Figure 1(c), cache blocks stored in the L1 cache must also be stored in the L2 cache. When a block is evicted from the L2 cache, the corresponding block in the L1 cache (if present) has to be invalidated to maintain inclusion (referred to as back-invalidation [Jaleel et al 2010]). Thus, the capacity of the whole inclusive cache hierarchy equals the capacity of its LLC (the L2 cache in this example).…”
Section: Motivation
confidence: 99%
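The capacity argument in that excerpt reduces to simple arithmetic; the cache sizes below are made-up examples, not figures from the cited work:

```python
# With inclusion, every L1-resident block also occupies an L2 frame,
# so the number of unique blocks the hierarchy holds is bounded by the
# LLC alone; with exclusion, no block is duplicated.
l1_kb, l2_kb = 32, 256                 # hypothetical example sizes

inclusive_unique_kb = l2_kb            # L1 contents are duplicates of L2
exclusive_unique_kb = l1_kb + l2_kb    # L1 and L2 hold disjoint blocks
```

A non-inclusive hierarchy falls between the two bounds, since duplication is permitted but not required.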
“…Previous work [Jaleel et al 2010] identified blocks that have high temporal locality in higher-level caches and reduced the frequency of back-invalidating them, which allows inclusive cache hierarchies to achieve the performance of non-inclusive caches. However, blocks that have poor temporal locality in higher-level caches may still have temporal locality in the LLC, and the replacement of these blocks will still hurt overall performance.…”
Section: Motivation
confidence: 99%