Proceedings of the 2nd Conference on Computing Frontiers 2005
DOI: 10.1145/1062261.1062321
Exploiting temporal locality in drowsy cache policies

Abstract: Technology projections indicate that static power will become a major concern in future generations of high-performance microprocessors. Caches represent a significant percentage of the overall microprocessor die area. Therefore, recent research has concentrated on the reduction of leakage current dissipated by caches. The variety of techniques to control current leakage can be classified as non-state preserving or state preserving. Non-state preserving techniques power off selected cache lines while state pre…

Cited by 41 publications (44 citation statements). References 17 publications.
“…Cache Decay relies on fine-grained logic counters, which are expensive, especially for large lower-level caches. Drowsy Caches [8,23] periodically move inactive lines to a low-power mode in which they cannot be read or written. However, this scheme is less applicable in deep-nm technology nodes, where the difference between Vdd and Vt will be smaller.…”
Section: Related Work
confidence: 99%
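The periodic drowsy policy summarized in the statement above can be illustrated with a small simulation sketch. All class and parameter names here are hypothetical, not taken from the paper: every `window` cycles, all lines drop into a state-preserving drowsy (low-voltage) mode, and an access to a drowsy line must first wake it, paying a small latency penalty.

```python
class DrowsyCacheLine:
    def __init__(self):
        self.drowsy = False


class DrowsyCache:
    """Minimal sketch of a 'simple' global drowsy policy."""

    def __init__(self, num_lines, window):
        self.lines = [DrowsyCacheLine() for _ in range(num_lines)]
        self.window = window  # cycles between global drowsy sweeps
        self.cycle = 0
        self.wakeups = 0      # accesses that paid the wake-up penalty

    def tick(self, n=1):
        """Advance time; at each window boundary, drowse every line."""
        for _ in range(n):
            self.cycle += 1
            if self.cycle % self.window == 0:
                for line in self.lines:
                    line.drowsy = True

    def access(self, index):
        """A drowsy line cannot be read or written until it is woken."""
        line = self.lines[index]
        if line.drowsy:
            line.drowsy = False  # wake: restore full supply voltage
            self.wakeups += 1    # models the extra wake-up cycle(s)


cache = DrowsyCache(num_lines=4, window=10)
cache.tick(10)       # window expires: all lines go drowsy
cache.access(0)      # pays one wake-up penalty
cache.access(0)      # line already awake: no penalty
print(cache.wakeups) # -> 1
```

Because the mode is state-preserving, a woken line needs no refill from the next cache level; the cost of a mispredicted drowsy line is only the wake-up latency, not a miss.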
“…In L1 caches, more than 90% of cache accesses hit the Most Recently Used (MRU) way [14]. Thus, to achieve good performance, in the HER cache the MRU block of each cache set is always stored in an SRAM bank.…”
Section: A High-Performance Mode
confidence: 99%
“…The idea is to keep the data most recently used (MRU) by the processor always in the SRAM cell. Previous works [37] have shown that the MRU line in each cache set tends to be accessed with a much higher probability than the remaining ones (for instance, 92.15% of the accesses in a 16 KB 4-way L1). Therefore, keeping the MRU data in the SRAM cell might provide energy benefits because SRAM reads are non-destructive.…”
Section: Macrocell Internals
confidence: 99%
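The MRU-in-SRAM organization described in the two statements above can be sketched as follows. The names are hypothetical, not from the cited papers: one slot per set models the fast SRAM bank holding the MRU block, while the remaining ways model the low-leakage storage; a hit on a non-MRU way promotes that block into the SRAM slot.

```python
class HybridSet:
    """Sketch of a set whose MRU way lives in SRAM, the rest in
    low-leakage storage."""

    def __init__(self, ways):
        # ways[0] is the SRAM slot (MRU block); the rest are the
        # low-leakage ways, ordered most- to least-recently used.
        self.ways = list(ways)

    def access(self, tag):
        if tag == self.ways[0]:
            return "sram_hit"   # fast path: MRU hit in the SRAM bank
        if tag in self.ways:
            i = self.ways.index(tag)
            # Promote the hit block to the SRAM (MRU) slot and demote
            # the previous MRU into the vacated low-leakage way.
            self.ways[0], self.ways[i] = self.ways[i], self.ways[0]
            return "slow_hit"
        # Miss: drop the last way (simplified eviction), fill as new MRU.
        self.ways = [tag] + self.ways[:-1]
        return "miss"


s = HybridSet(["A", "B", "C", "D"])
print(s.access("A"))  # -> sram_hit  (MRU already in SRAM)
print(s.access("C"))  # -> slow_hit  (promoted to the SRAM slot)
print(s.access("C"))  # -> sram_hit
print(s.access("E"))  # -> miss
```

If, as the quoted measurement suggests, roughly 90% of accesses hit the MRU way, most accesses take the fast `sram_hit` path and the slower low-leakage ways are touched only rarely.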
“…The techniques falling in the second group put the selected lines into a state-preserving low-power mode [14,37], reaching the same hit rate as a conventional cache. However, they save less leakage, since the lines are not completely turned off.…”
Section: Leakage Reduction in SRAM Caches
confidence: 99%