Proceedings of the 49th Annual Design Automation Conference 2012
DOI: 10.1145/2228360.2228482
Courteous cache sharing

Cited by 13 publications (1 citation statement) · References 24 publications
“…The fair miss rate of a thread is defined as the miss rate that the thread experiences when the shared cache is equally distributed among concurrent threads. The approach followed in [27] is based on changing the LRU policy to focus on fairness among cores by penalizing the core with the highest IPC in favor of the others. The work in [28] evaluates several static partitioning approaches to achieve fairness on an Ivy Bridge architecture.…”
Section: Related Work (mentioning)
confidence: 99%
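
The quoted statement defines a thread's fair miss rate as the miss rate it would experience if the shared cache were split equally among concurrent threads. The following is a minimal illustrative sketch, not code from the cited papers: the function and variable names, the per-thread ratio of observed to fair miss rate, and the spread-based unfairness measure are all assumptions made here to show how such a definition could be turned into a concrete metric.

# Illustrative sketch (assumed, not from [27] or [28]): per-thread fairness
# ratios based on the fair-miss-rate definition quoted above.

def miss_ratio(misses: int, accesses: int) -> float:
    """Miss rate = cache misses / total cache accesses."""
    return misses / accesses if accesses else 0.0

def fairness_ratios(shared: dict, equal_partition: dict) -> dict:
    """For each thread id, the ratio of its observed shared-cache miss rate
    to its fair miss rate (measured with the cache equally partitioned).
    A ratio of 1.0 means the thread is unaffected by sharing."""
    ratios = {}
    for tid, (m_shared, acc_shared) in shared.items():
        m_fair, acc_fair = equal_partition[tid]
        fair_rate = miss_ratio(m_fair, acc_fair)
        shared_rate = miss_ratio(m_shared, acc_shared)
        ratios[tid] = shared_rate / fair_rate if fair_rate else float("inf")
    return ratios

def unfairness(ratios: dict) -> float:
    """One possible unfairness measure (an assumption here): the spread
    between the most and least penalized threads; 0.0 is perfectly fair."""
    vals = list(ratios.values())
    return max(vals) - min(vals)

# Example: thread 1 suffers noticeably more misses under sharing than it
# would with an equal partition, so it dominates the unfairness measure.
shared = {1: (900, 10_000), 2: (400, 10_000)}   # (misses, accesses), shared cache
equal  = {1: (600, 10_000), 2: (380, 10_000)}   # (misses, accesses), equal partitions
r = fairness_ratios(shared, equal)
print(r, unfairness(r))                          # {1: 1.5, 2: ~1.05}, spread ~0.45

A policy aiming at fairness, such as the modified LRU replacement described for [27], could use a measure of this kind to decide which core to penalize.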