Decoupled Compressed Cache: Exploiting Spatial Locality for Energy Optimization
2014 | DOI: 10.1109/mm.2014.42

Cited by 37 publications (97 citation statements)
References 8 publications
“…There are also two well-known overheads of data compression: (1) compression/decompression overhead [2], [18] in terms of latency, energy, and area, and (2) complexity/cost to support variable data sizes [12], [20], [17], [22]. Both problems have solutions: e.g., Base-Delta-Immediate compression [18] provides a low-latency, low-energy hardware-based compression algorithm, and Decoupled Compressed Cache [20] provides a mechanism to manage data recompaction and fragmentation in compressed caches.…”
Section: Why Data Compression Can Be Energy-Inefficient
confidence: 99%
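As a rough illustration of the base-plus-delta idea this statement cites, the sketch below compresses a 32-byte block of eight 32-bit words into one base value plus signed 1-byte deltas when every word lies close to the base. It is a minimal sketch under assumed parameters; the type and function names are illustrative and are not taken from the BDI paper or the DCC design.

#include <stdint.h>
#include <stdbool.h>

/* Illustrative base+delta compression for one 32-byte cache block
   (eight 32-bit words). If every word fits in base + signed 8-bit
   delta, the block shrinks from 32 bytes to 4 + 8 = 12 bytes. */
typedef struct {
    uint32_t base;      /* first word of the block serves as the base */
    int8_t   delta[8];  /* per-word signed deltas from the base       */
} bdi_block_t;

static bool bdi_compress(const uint32_t words[8], bdi_block_t *out)
{
    out->base = words[0];
    for (int i = 0; i < 8; i++) {
        int64_t d = (int64_t)words[i] - (int64_t)out->base;
        if (d < -128 || d > 127)
            return false;           /* block is not compressible this way */
        out->delta[i] = (int8_t)d;
    }
    return true;
}

static void bdi_decompress(const bdi_block_t *in, uint32_t words[8])
{
    for (int i = 0; i < 8; i++)
        words[i] = in->base + (int32_t)in->delta[i];
}

Because compression and decompression are a handful of adds and compares, this style of algorithm is what the quote means by low-latency, low-energy hardware compression.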
“…One potential way of addressing multiple of the described constraints is to employ dedicated hardware-based data compression mechanisms (e.g., [27], [2], [7], [18], [4]) across various data links in the system. Compression exploits the high data redundancy observed in many modern applications [18], [20], [4], [26]. It can be used to improve both capacity (e.g., of caches, DRAM, non-volatile memories [27], [2], [7], [18], [4], [17], [22], [16], [26]) and bandwidth utilization (e.g., of on-chip and off-chip interconnects [8], [3], [24], [21], [17], [22], [26]).…”
Section: Introduction
confidence: 99%
“…Prior work has proposed different compression algorithms that tradeoff compression ratio (i.e., original size over compressed size) and decompression latency. We use the C-PACK+Z compression algorithm because it has been shown to have a good compression ratio with moderate decompression latency and hardware overheads [26] [31]. In general, SCC is largely independent of the compression algorithm in use.…”
Section: Compressed Caching
confidence: 99%
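To make the metric in this statement concrete, the small sketch below computes the compression ratio as original size divided by compressed size; the 64-byte and 24-byte figures are a hypothetical example, not values from the cited papers.

#include <stdio.h>

/* Compression ratio = original size / compressed size.
   A ratio of 2.0 means the data occupies half its original space. */
static double compression_ratio(size_t original_bytes, size_t compressed_bytes)
{
    return (double)original_bytes / (double)compressed_bytes;
}

int main(void)
{
    /* Hypothetical example: a 64-byte cache line compressed to 24 bytes. */
    printf("ratio = %.2f\n", compression_ratio(64, 24));  /* prints 2.67 */
    return 0;
}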
“…The earliest compressed caches do not support variable compressed block sizes [25] [29] [16], allowing fast lookups with relatively low area overheads, but achieve lower compression effectiveness due to internal fragmentation. More recent designs [26] [1] [13] improve compression effectiveness using variable-size compressed blocks, but at the cost of extra metadata and indirection latency to locate a compressed block.…”
Section: Introduction
confidence: 99%
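The extra metadata and indirection this passage refers to can be pictured with the simplified lookup sketch below: a tag match alone no longer locates the data, because each tag entry must record where its variable-size compressed block starts and how large it is. The field names, entry counts, and segment sizes are assumptions for illustration, not the actual SCC or DCC structures.

#include <stdint.h>
#include <stddef.h>

#define SEGMENT_BYTES    8
#define SEGMENTS_PER_SET 64   /* e.g., 512 bytes of data storage per set */

/* Simplified decoupled (variable-size) compressed cache set: tags are
   over-provisioned and each one points into shared data storage. */
typedef struct {
    uint64_t tag;            /* address tag                              */
    uint8_t  valid;          /* 1 if this entry holds a block            */
    uint8_t  start_segment;  /* first 8-byte segment used in the set     */
    uint8_t  num_segments;   /* compressed size in segments (1..8)       */
} tag_entry_t;

typedef struct {
    tag_entry_t tags[16];                               /* extra tag metadata     */
    uint8_t     data[SEGMENTS_PER_SET * SEGMENT_BYTES]; /* shared compressed data */
} cache_set_t;

/* Returns a pointer to the compressed bytes of the block matching 'tag',
   or NULL on a miss. The second step through start_segment is the
   indirection latency the citing paper mentions. */
static const uint8_t *lookup(const cache_set_t *set, uint64_t tag, size_t *len)
{
    for (int i = 0; i < 16; i++) {
        if (set->tags[i].valid && set->tags[i].tag == tag) {
            *len = (size_t)set->tags[i].num_segments * SEGMENT_BYTES;
            return &set->data[(size_t)set->tags[i].start_segment * SEGMENT_BYTES];
        }
    }
    return NULL;
}

The design trade-off the quote describes falls out of this picture: fixed-size schemes can skip the start/size fields and the second lookup step, while variable-size schemes pay for them in metadata area and hit latency but waste less space to internal fragmentation.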