Proceedings of the 17th International Conference on Parallel Architectures and Compilation Techniques 2008
DOI: 10.1145/1454115.1454145

Adaptive insertion policies for managing shared caches

Abstract: Chip Multiprocessors (CMPs) allow different applications to concurrently execute on a single chip. When applications with differing demands for memory compete for a shared cache, the conventional LRU replacement policy can significantly degrade cache performance when the aggregate working set size is greater than the shared cache. In such cases, shared cache performance can be significantly improved by preserving the entire working set of applications that can co-exist in the cache and preserving some portion …

Cited by 255 publications (173 citation statements)
References 18 publications

“…Recent work has noted the poor scalability of maintaining separate monitors for each core, and methods including In-Cache Estimation Monitors [11] and set-dueling [12] have been proposed to eliminate the need for separate monitors. A number of sets in the cache are dedicated to a particular core, from which the monitored statistics can be gathered.…”
Section: Related Work
confidence: 99%
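
For context, the excerpt above refers to set-dueling's core mechanism: a few "leader" sets are permanently dedicated to each of two competing policies, a single saturating policy-selection (PSEL) counter tracks which leaders miss less often, and all remaining "follower" sets adopt the currently winning policy. The C++ sketch below illustrates that idea under assumed parameters; the set count, counter width, and leader-set layout are illustrative choices, not values taken from [12].

```cpp
// Minimal set-dueling sketch: two fixed leader-set groups duel, a saturating
// PSEL counter tallies their misses, and follower sets consult PSEL's value.
#include <cstdint>
#include <iostream>

constexpr uint32_t kNumSets      = 1024;  // total cache sets (assumed)
constexpr uint32_t kLeaderStride = 32;    // one leader per 32 sets per policy (assumed)
constexpr int32_t  kPselMax      = 1023;  // 10-bit saturating counter (assumed)

enum class Policy { A, B };

int32_t psel = kPselMax / 2;  // start near the middle, i.e. undecided

// Leader-set assignment: sets 0, 32, 64, ... duel for policy A;
// sets 16, 48, 80, ... duel for policy B (a simple illustrative layout).
bool isLeaderA(uint32_t set) { return set % kLeaderStride == 0; }
bool isLeaderB(uint32_t set) { return set % kLeaderStride == kLeaderStride / 2; }

// Called on every miss; a miss in a leader set steers PSEL toward the
// competing policy.
void onMiss(uint32_t set) {
    if (isLeaderA(set) && psel < kPselMax) ++psel;      // A missed: favor B
    else if (isLeaderB(set) && psel > 0)   --psel;      // B missed: favor A
}

// Leader sets always use their fixed policy; followers consult PSEL.
Policy policyFor(uint32_t set) {
    if (isLeaderA(set)) return Policy::A;
    if (isLeaderB(set)) return Policy::B;
    return (psel >= kPselMax / 2) ? Policy::B : Policy::A;
}

int main() {
    // Simulate policy A missing more often than policy B in the leaders.
    for (int i = 0; i < 200; ++i) onMiss(0);    // a leader set for policy A
    for (int i = 0; i < 50;  ++i) onMiss(16);   // a leader set for policy B
    std::cout << "followers use policy "
              << (policyFor(1) == Policy::B ? "B" : "A") << "\n";
}
```

The appeal of this scheme, as the excerpt notes, is that the dueling sets are ordinary cache sets, so no separate monitor structure is needed per core.
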
“…Recent work has shown that level-two cache miss rates can be reduced by preventing or limiting the amount of data retained for applications with streaming data accesses or working sets larger than the level-two cache capacity [16]. For shared caches, different data retention policies can be applied to different processes in order to maximize the use of the shared capacity [17]. Other approaches, like cache decay, can target the same types of data in order to reduce the used capacity in a cache by turning off lines that have remained idle for long periods of time [18].…”
Section: Cache Design and Management
confidence: 99%
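
The retention-limiting idea in the first sentence of that excerpt can be sketched as an insertion policy that places incoming blocks at the LRU rather than the MRU end of the recency stack, promoting them only on actual reuse; with a small "bimodal" probability a block is still placed at MRU so a large working set can slowly establish itself. The C++ sketch below assumes a software recency-stack model and an arbitrary epsilon value; it illustrates the general technique rather than the exact mechanism of [16].

```cpp
// Sketch of a retention-limiting (LRU-end / bimodal) insertion policy.
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <deque>
#include <iostream>

constexpr double kEpsilon = 1.0 / 32.0;  // assumed bimodal probability

struct CacheSet {
    std::deque<uint64_t> stack;  // front = MRU position, back = LRU position
    size_t ways;
    explicit CacheSet(size_t w) : ways(w) {}

    // Returns true on a hit, false on a miss.
    bool access(uint64_t tag) {
        auto it = std::find(stack.begin(), stack.end(), tag);
        if (it != stack.end()) {            // hit: promote to MRU on reuse
            stack.erase(it);
            stack.push_front(tag);
            return true;
        }
        if (stack.size() == ways) stack.pop_back();   // evict the LRU victim
        // Bimodal insertion: usually LRU position, rarely MRU position.
        if (std::rand() < kEpsilon * RAND_MAX)
            stack.push_front(tag);          // rare: grant the block full tenure
        else
            stack.push_back(tag);           // common: first in line for eviction
        return false;
    }
};

int main() {
    std::srand(1);
    CacheSet set(8);
    int hits = 0, total = 0;
    uint64_t streamTag = 1000;
    for (int round = 0; round < 10000; ++round) {
        for (uint64_t hot = 0; hot < 4; ++hot) {   // small, reused working set
            hits += set.access(hot); ++total;
        }
        set.access(streamTag++);                   // streaming block, never reused
    }
    std::cout << "hot-set hit rate: " << 100.0 * hits / total << "%\n";
}
```

Because streaming blocks enter at the LRU end, they are evicted almost immediately and never displace the reused working set, which is exactly the retention limiting the excerpt describes.
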
“…As the different types of data exhibit different characteristics with respect to idle frequency and idle duration, it may also be beneficial to tailor these techniques to specific types of data, as suggested as future work in [17]. For example, data in ScalParC that is shared by many threads goes idle frequently and for long periods of time, so it might be worthwhile to prevent it from being cached.…”
Section: Data Reuse
confidence: 99%
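
As a rough illustration of the cache-decay mechanism these excerpts refer to, the C++ sketch below ages each line once per decay interval and powers off lines that stay idle past a threshold. The interval, counter width, and threshold here are assumptions, and the cited decay design uses a global cycle counter driving small per-line counters rather than this flat scheme.

```cpp
// Sketch of cache decay: per-line idle counters, lines gated off when stale.
#include <array>
#include <cstdint>
#include <iostream>

constexpr uint8_t kDecayThreshold = 3;  // idle intervals before power-off (assumed)

struct Line {
    uint64_t tag = 0;
    bool valid = false;
    bool powered = false;
    uint8_t idleIntervals = 0;
};

struct DecaySet {
    std::array<Line, 8> lines{};

    // Called on every access that hits this line: reset its idle counter.
    void touch(size_t way) { lines[way].idleIntervals = 0; }

    // Called once per decay interval (e.g. every few thousand cycles):
    // age each live line, and turn off lines that have decayed.
    void tick() {
        for (Line& l : lines) {
            if (!l.valid || !l.powered) continue;
            if (++l.idleIntervals >= kDecayThreshold) {
                l.powered = false;   // gate the line to save leakage power
                l.valid = false;     // data is lost; a later access must miss
            }
        }
    }
};

int main() {
    DecaySet set;
    set.lines[0] = {0xABC, true, true, 0};
    for (int i = 0; i < 4; ++i) set.tick();   // line 0 is never touched
    std::cout << "line 0 powered: " << set.lines[0].powered << "\n";  // prints 0
}
```

Tailoring decay to data type, as the excerpt suggests, would amount to choosing kDecayThreshold per class of data, with frequently idle shared data given a short threshold or bypassed entirely.
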
“…Shared caches in contemporary multicores have repeatedly been shown to be critical resources for performance [15,23,28,8,17]. A significant amount of research has investigated the impact of cache sharing on application performance [23,30,12,11].…”
Section: Introduction
confidence: 99%