Proceedings of the 33rd Annual ACM/IEEE International Symposium on Microarchitecture 2000
DOI: 10.1145/360128.360150

Dynamic zero compression for cache energy reduction

Abstract: Dynamic Zero Compression reduces the energy required for cache accesses by only writing and reading a single bit for every zero-valued byte. This energy-conscious compression is invisible to software and is handled with additional circuitry embedded inside the cache RAM arrays and the CPU. The additional circuitry imposes a cache area overhead of 9% and a read latency overhead of around two FO4 gate delays. Simulation results show that we can reduce total data cache energy by around 26% and instruction cache e…
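To make the encoding concrete, here is a minimal C sketch of the access pattern the abstract describes, under stated assumptions: one zero-indicator bit (ZIB) per byte, written instead of the byte when the value is zero and checked on reads so a zero can be returned without touching the byte cells. The type and function names (dzc_line_t, dzc_write_byte, dzc_read_byte) and the 32-byte line size are illustrative choices, not taken from the paper, and the sketch models only the access pattern, not the RAM-array circuitry that implements it.

#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES 32  /* illustrative cache-line size, not from the paper */

/* Hypothetical model of one DZC-encoded cache line: one Zero Indicator
 * Bit (ZIB) per byte plus the byte storage itself. Only the access
 * pattern is modeled here. */
typedef struct {
    uint8_t zib[LINE_BYTES];   /* 1 = byte is zero, byte cells not accessed */
    uint8_t data[LINE_BYTES];  /* byte cells, left untouched when the ZIB is set */
} dzc_line_t;

/* Write: for a zero-valued byte, only the single ZIB is written. */
static void dzc_write_byte(dzc_line_t *line, int i, uint8_t value) {
    if (value == 0) {
        line->zib[i] = 1;          /* one-bit write, byte cells stay idle */
    } else {
        line->zib[i] = 0;
        line->data[i] = value;     /* full byte write */
    }
}

/* Read: a set ZIB short-circuits the byte read and returns zero. */
static uint8_t dzc_read_byte(const dzc_line_t *line, int i) {
    return line->zib[i] ? 0 : line->data[i];
}

int main(void) {
    dzc_line_t line = {0};
    dzc_write_byte(&line, 0, 0x00);   /* roughly one bit of array activity */
    dzc_write_byte(&line, 1, 0x7f);   /* a full byte of array activity */
    printf("%d %d\n", dzc_read_byte(&line, 0), dzc_read_byte(&line, 1));
    return 0;
}

In hardware, the corresponding saving comes from not activating the byte's portion of the RAM array when its indicator bit is set; the circuit-level details are in the paper itself.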

Cited by 134 publications (63 citation statements). References 11 publications (12 reference statements).
“…This cache model, initially presented by Ghose and Kamble [9], is based on a subbanking scheme and has the advantage that each word (within a cache line) can be read independently, without the need to read the whole cache line. The same model was used by Villa et al. [22] in their dynamic zero compression scheme. The results of the PA-DFVC, in terms of Energy-Delay Product and power reduction, are presented in Section 5.2.…”
Section: Fig. 4 HP-DFVC: Detailed Design
Confidence: 99%
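As a rough, assumption-laden illustration of why that subbanked organization helps, the short C sketch below compares the number of bit cells that switch for a word-granularity read against a whole-line read; the 32-byte line size, 4-byte word size, and function names are hypothetical, not taken from [9] or [22].

#include <stdio.h>

/* First-order activity model: assume dynamic energy scales with the
 * number of bit cells sensed per access (purely illustrative constants). */
#define LINE_BYTES 32   /* hypothetical cache-line size */
#define WORD_BYTES 4    /* hypothetical word size */

/* Subbanked array: only the addressed word's subbank is enabled. */
static unsigned bits_sensed_subbanked(void)  { return 8u * WORD_BYTES; }

/* Monolithic array: the whole line is sensed on every access. */
static unsigned bits_sensed_monolithic(void) { return 8u * LINE_BYTES; }

int main(void) {
    printf("per read: subbanked %u bits vs monolithic %u bits\n",
           bits_sensed_subbanked(), bits_sensed_monolithic());
    return 0;
}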
“…Cache/Memory compression has been proposed for better utilization of the available transistor budgets [1,13,22]. The idea behind this approach is to store cache lines in a compressed form so a greater number of cache lines can reside in the cache at any given time, lowering the miss rate.…”
Section: Design Issues of High-Performance DFVC (HP-DFVC)
Confidence: 99%
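To make the capacity argument concrete, the C sketch below estimates the stored size of a line under a simple, hypothetical zero-byte-elision format (a 1-bit-per-byte mask plus the non-zero bytes); it is an illustrative stand-in, not the scheme of [1], [13], or [22]. The smaller the average stored size, the more lines fit in the same arrays, which is what lowers the miss rate.

#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES 32   /* illustrative cache-line size */

/* Compressed size in bits of one line: an always-present 1-bit-per-byte
 * zero mask plus only the non-zero bytes. */
static unsigned compressed_bits(const uint8_t line[LINE_BYTES]) {
    unsigned bits = LINE_BYTES;            /* per-byte zero mask */
    for (int i = 0; i < LINE_BYTES; i++)
        if (line[i] != 0)
            bits += 8;                      /* only non-zero bytes are stored */
    return bits;
}

int main(void) {
    uint8_t line[LINE_BYTES] = {0};        /* mostly-zero line */
    line[0] = 0x42;
    /* If lines shrink to this fraction of their size on average, roughly
     * the inverse of that fraction more lines fit in the same arrays. */
    printf("compressed to %u of %u bits\n", compressed_bits(line), 8 * LINE_BYTES);
    return 0;
}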
“…In addition, dynamic data compression techniques ([14], [15]) have been proposed as a way to reduce energy consumption in processor units. They are orthogonal to the design proposed here.…”
Section: Related Work
Confidence: 99%