HPCA-16 2010: The Sixteenth International Symposium on High-Performance Computer Architecture
DOI: 10.1109/hpca.2010.5416642
CHOP: Adaptive filter-based DRAM caching for CMP server platforms

Cited by 112 publications (123 citation statements) · References 27 publications
“…Prior approaches toward reducing metadata storage overhead have managed the DRAM cache using large cache lines on the order of kilobytes [1,4] or represented the presence of smaller sectors of a large cache block as a bit vector, eliminating the need to store full tag metadata [7,13]. These techniques still store metadata for all of DRAM, increasing bandwidth consumption and pollution of the DRAM cache, increasing false-sharing probability, and limiting scalability.…”
Section: Related Work
confidence: 99%
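The excerpt above mentions representing the presence of smaller sectors of a large cache block as a bit vector, so that only one tag is stored per large block. A minimal sketch of that idea, with assumed block and sector sizes (2 KiB blocks, 64-byte sectors — illustrative values, not from the cited papers):

```python
# Hypothetical sketch: one tag covers a large cache block; a bit vector
# records which 64-byte sectors of that block are actually cached,
# avoiding per-sector tag metadata.
BLOCK_SIZE = 2048            # assumed large-block size (bytes)
SECTOR_SIZE = 64             # assumed sector size (bytes)
SECTORS = BLOCK_SIZE // SECTOR_SIZE  # 32 sectors per block

class SectoredEntry:
    def __init__(self, tag):
        self.tag = tag       # single tag for the whole large block
        self.present = 0     # bit i set => sector i is present

    def _sector(self, addr):
        return (addr % BLOCK_SIZE) // SECTOR_SIZE

    def fill(self, addr):
        self.present |= 1 << self._sector(addr)

    def hit(self, addr):
        return bool((self.present >> self._sector(addr)) & 1)
```

One bit per sector replaces a full per-sector tag, which is where the metadata saving comes from — though, as the excerpt notes, metadata is still kept for every large block.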
“…As a result, they are primarily optimized for low cache-memory bandwidth utilization through block-based organizations, 7,8 sector-based footprint-predicting organizations, 4,6 and address-correlated filter-based caching mechanisms. 11 Unfortunately, such organizations come with high tag and/or metadata overhead and high design complexity, making such cache designs impractical. For instance, state-of-the-art block-based and footprint-predicting caches require 4 Gbytes and 200 Mbytes of tags, respectively, for a capacity of 32 Gbytes.…”
Section: State-of-the-Art DRAM Caches
confidence: 99%
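The 4-Gbyte and 200-Mbyte figures in the excerpt follow from simple per-entry arithmetic. A rough reconstruction, with assumed per-entry metadata sizes (8 bytes per 64-byte block tag, 12 bytes per 2-KiB page entry — plausible values chosen to match the cited figures, not taken from the papers):

```python
GiB = 1024 ** 3
cache_capacity = 32 * GiB          # 32-Gbyte DRAM cache from the excerpt

# Block-based: one tag entry per 64-byte block.
# Assumed ~8 bytes of tag + metadata per block.
blocks = cache_capacity // 64              # 512 Mi blocks
block_tag_storage = blocks * 8             # 4 GiB of tags

# Footprint-predicting: one entry per 2-KiB page.
# Assumed ~12 bytes per entry (tag + footprint bits).
pages = cache_capacity // 2048             # 16 Mi pages
page_tag_storage = pages * 12              # ~190-200 Mbytes of tags
```

The takeaway matches the excerpt: at tens of gigabytes of stacked DRAM, fine-grained tags alone dwarf on-die SRAM budgets, which is why such designs are called impractical.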
“…This capacity constraint precludes the use of die-stacked DRAM as main memory. Hence, most proposals advocate employing die-stacked DRAM as a cache to filter out accesses to off-chip main memory [10], [11], [20], [24].…”
Section: Background and Motivation
confidence: 99%
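The filtering role described above — die-stacked DRAM as a cache that absorbs requests before they reach off-chip memory — can be sketched with a toy model (all interfaces here are assumptions for illustration, not the cited designs):

```python
# Toy sketch: a die-stacked DRAM cache that filters reads so only
# misses reach off-chip main memory. Fully associative, no eviction,
# purely to illustrate the access-filtering idea.
class StackedDRAMCache:
    LINE = 64                      # assumed cache-line size (bytes)

    def __init__(self):
        self.lines = {}            # line tag -> data
        self.offchip_accesses = 0  # counts accesses that were NOT filtered

    def read(self, addr, offchip_mem):
        tag = addr // self.LINE
        if tag in self.lines:      # hit: served from stacked DRAM
            return self.lines[tag]
        self.offchip_accesses += 1 # miss: go off chip, then fill
        data = offchip_mem[tag]
        self.lines[tag] = data
        return data
```

Every hit in the stacked DRAM is one off-chip access avoided, which is the benefit all four cited proposals are chasing.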
“…As a result, researchers have been exploring various ways to use the die-stacked DRAM as a giant last-level cache [10], [11], [19], [20], [24], [33]. There are a number of fundamental challenges that these past works have tried to address:…”
Section: Introduction
confidence: 99%