Proceedings of the 42nd Annual International Symposium on Computer Architecture 2015
DOI: 10.1145/2749469.2750398
SLIP

Abstract: Wire energy has become the major contributor to energy in large lower-level caches. While wire energy is related to wire latency, its costs are exposed differently in the memory hierarchy. We propose Sub-Level Insertion Policy (SLIP), a cache management policy which improves cache energy consumption by increasing the number of accesses from energy-efficient locations while simultaneously decreasing intra-level data movement. In SLIP, each cache level is partitioned into several cache sublevels of differing size…
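The abstract's core idea — partitioning a cache level into sublevels of differing size and steering lines to energy-efficient locations — can be illustrated with a minimal sketch. All names, sublevel sizes, energy figures, and reuse thresholds below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of sub-level insertion: a cache level is split into
# sublevels of differing size, where smaller sublevels (shorter wires) have
# cheaper access energy. A line's predicted reuse count picks its sublevel.
# Capacities, energies, and thresholds are made-up illustrative numbers.

SUBLEVELS = [  # ordered small -> large
    {"name": "near", "capacity_lines": 64,   "access_energy_pj": 5},
    {"name": "mid",  "capacity_lines": 256,  "access_energy_pj": 20},
    {"name": "far",  "capacity_lines": 1024, "access_energy_pj": 80},
]

def choose_sublevel(predicted_reuses: int) -> dict:
    """Insert frequently reused lines near (cheap accesses); cold lines far."""
    if predicted_reuses >= 8:
        return SUBLEVELS[0]
    if predicted_reuses >= 2:
        return SUBLEVELS[1]
    return SUBLEVELS[2]

print(choose_sublevel(16)["name"])  # hot line  -> near
print(choose_sublevel(0)["name"])   # cold line -> far
```

The intended effect is that most accesses hit the small, energy-cheap sublevel, while the large sublevel provides capacity for lines with little expected reuse.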

Cited by 17 publications (4 citation statements)
References 36 publications (61 reference statements)
“…Huh et al [50] add a configurable degree of replication and a directory to manage coherence among the replicas within the LLC. ASR [13] controls the amount of replication of shared read-only blocks with a probabilistic cost-benefit measure, and SLIP [35] introduces block reuse counters to avoid power-inefficient migrations and replications of blocks that are not reused.…”
Section: A Hardware-managed NUCA Caches
confidence: 99%
“…To do so, D-NUCA designs decide the best allocation strategy for each cache block based on its access pattern and apply well-known strategies such as allocating private cache blocks close to the accessing core, replicating shared read-only cache blocks in multiple banks, and bypassing the LLC for non-reused cache blocks. Although many proposals perform these actions at the microarchitectural level [13], [14], [31], [32], [35], [45], [56], [98], state-of-the-art techniques such as Reactive NUCA (R-NUCA) [48], [49] rely on Operating System (OS) support to identify the data access patterns [34], [48], [49], [59], [79], [80], which has important drawbacks [41]–[43]. OS-based approaches identify shared and private data, but they are unable to identify temporarily private and non-reused data.…”
Section: Introduction
confidence: 99%
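The citation statements above note that SLIP uses block reuse counters to avoid power-inefficient migrations and replications. A minimal sketch of that kind of cost-benefit gate, with hypothetical function and parameter names (none taken from the paper):

```python
# Hypothetical reuse-counter gate for intra-cache migration/replication:
# moving a line closer costs a one-off energy, and each subsequent access
# from the nearer location saves some energy. Only migrate when the
# expected accesses (tracked by a reuse counter) repay the move.

def should_migrate(reuse_count: int,
                   migrate_energy_pj: float,
                   per_access_saving_pj: float) -> bool:
    """Migrate only if expected per-access savings exceed the move cost."""
    return reuse_count * per_access_saving_pj > migrate_energy_pj

print(should_migrate(10, 100.0, 15.0))  # well-reused line: worth moving
print(should_migrate(1, 100.0, 15.0))   # barely reused: move would waste energy
```

This captures why non-reused blocks are the problem case: for them the migration energy is pure overhead with no accesses to amortize it.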
“…In general, the core of cache bypassing techniques is determining which blocks should be bypassed. For example, reuse distance can be a key measure in deciding whether to bypass [4,5,6,7]. Other works consider reuse counts when making the bypassing decision [8,9,10,11].…”
Section: Related Work
confidence: 99%
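The reuse-distance criterion mentioned above can be sketched as follows. This is an illustrative toy predictor, not an implementation from any of the cited works; the class and its capacity-based threshold are assumptions:

```python
# Hypothetical reuse-distance bypass predictor: if a line's observed reuse
# distance (accesses since its last touch) exceeds the cache capacity, it
# would likely be evicted before reuse anyway, so insertion is skipped.

class BypassPredictor:
    def __init__(self, capacity_lines: int):
        self.capacity = capacity_lines
        self.last_access = {}  # addr -> logical time of last access
        self.clock = 0

    def access(self, addr: int) -> bool:
        """Record an access; return True if the line should bypass the cache."""
        self.clock += 1
        prev = self.last_access.get(addr)
        self.last_access[addr] = self.clock
        if prev is None:
            return False  # no history: insert by default
        reuse_distance = self.clock - prev
        return reuse_distance > self.capacity

p = BypassPredictor(capacity_lines=4)
for addr in [1, 2, 3, 4, 5]:
    p.access(addr)
print(p.access(1))  # reuse distance 5 > capacity 4 -> bypass
```

Reuse-count-based schemes make the same insert-or-bypass decision but from a counter of past reuses rather than the gap between accesses.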
“…Das, Aamodt, and Dally [49] introduce Sub-Level Insertion Policies (SLIP). They observe that D-NUCA techniques such as NUCA-Migration and NUCA-Replication introduce significant energy costs, since migrating and replicating lines within the cache incurs an energy cost that is wasted if the cache line is not reused, or is reused very little.…”
Section: Introduction
confidence: 99%