2013
DOI: 10.1002/cpe.3123

A fine‐grained thread‐aware management policy for shared caches

Abstract: Two of the main sources of inefficiency in current caches are the non-uniform distribution of memory accesses across the cache sets, which causes misses due to the mapping restrictions of non-fully-associative caches, and access patterns with little locality, which degrade the performance of caches under the traditional least recently used (LRU) replacement policy. This paper proposes a technique to tackle both kinds of problems in a coordinated way in the context of chip multiprocessors, whose last lev…

Cited by 3 publications (4 citation statements)
References 27 publications (61 reference statements)
“…The constraints are the same with those in the two-stage model, that is, constraints (12), (13), (14), (15) and (16).…”
Section: Comparison With One-stage Methodsmentioning
confidence: 99%
“…Promotion policy determines how to update the hit cache line in the replacement priority order. The thread-aware Dynamic Insertion Policy (TADIP) [15][16][17] dynamically chooses the insertion policies of the traditional LRU policy and the Bimodal Insertion Policy, according to the cache requirements of different applications.…”
Section: Introductionmentioning
confidence: 99%
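The citation above describes how TADIP-style policies choose per thread between traditional LRU insertion and the Bimodal Insertion Policy (BIP), which usually inserts incoming lines at the LRU position and only occasionally at the MRU position. A minimal sketch of one cache set under these two insertion modes, assuming a recency-ordered list per set (class and parameter names are hypothetical, not from the paper):

```python
import random

class SetSample:
    """One set-associative cache set modeled as a recency-ordered
    list of tags (index 0 = MRU, last index = LRU)."""

    def __init__(self, ways):
        self.ways = ways
        self.lines = []  # tags, most recently used first

    def access(self, tag, use_bip, bip_epsilon=1 / 32):
        """Access `tag`; returns True on hit. `use_bip` selects BIP
        insertion instead of LRU insertion on a miss."""
        if tag in self.lines:             # hit: promote to MRU
            self.lines.remove(tag)
            self.lines.insert(0, tag)
            return True
        if len(self.lines) == self.ways:  # miss in a full set: evict LRU
            self.lines.pop()
        if use_bip and random.random() > bip_epsilon:
            self.lines.append(tag)        # BIP: usually insert at LRU position
        else:
            self.lines.insert(0, tag)     # LRU insertion: insert at MRU
        return False
```

BIP's LRU-position insertion protects the set from thrashing access patterns with little locality: a streaming line is evicted quickly unless it is re-referenced, while the occasional MRU insertion (probability `bip_epsilon`) lets the policy still adapt when the working set changes.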
“…Two of the main sources of inefficiency in current caches are the nonuniform distribution of the memory accesses across the cache sets, which causes misses because of the mapping restrictions of non fully associative caches and the access patterns with little locality that degrade the performance of caches under the traditional least recently used replacement policy. This special issue finishes with a paper by Rolán et al [3] that proposes a technique to tackle in a coordinated way both kinds of problems in the context of chip multiprocessors, whose last-level caches can be shared by threads with different patterns of locality. This proposal, called thread-aware mapping and replacement miss reduction (TAMR2) policy, tracks the behavior of each thread in each set in order to decide the appropriate combination of policies to deal with these problems.…”
mentioning
confidence: 99%
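The excerpt says only that TAMR2 tracks per-thread, per-set behavior to pick a combination of policies; it does not give the mechanism. Policies in this family commonly select between alternatives with a set-dueling saturating counter per thread, so a generic selector of that kind can be sketched as follows (all names hypothetical; this is not the paper's actual mechanism):

```python
class PolicySelector:
    """Per-thread saturating counter in the set-dueling style:
    a few 'leader' sets always use LRU, a few always use the
    alternative policy, and misses in each group nudge the counter."""

    def __init__(self, bits=10):
        self.max = (1 << bits) - 1
        self.psel = self.max // 2        # start unbiased

    def miss_in_lru_leader(self):
        """A miss in an LRU leader set argues for the alternative."""
        self.psel = min(self.max, self.psel + 1)

    def miss_in_alt_leader(self):
        """A miss in an alternative-policy leader set argues for LRU."""
        self.psel = max(0, self.psel - 1)

    def use_alternative(self):
        """Follower sets for this thread use whichever policy the
        counter currently favors."""
        return self.psel > self.max // 2
```

One such counter per thread lets threads with different locality patterns sharing a last-level cache settle on different policies independently, which matches the thread-aware motivation described in the quoted summary.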