2012 39th Annual International Symposium on Computer Architecture (ISCA)
DOI: 10.1109/isca.2012.6237047
The dynamic granularity memory system

Cited by 45 publications (58 citation statements)
References 24 publications
“…Leveraging the observation that applications access only a few words within a cache block, researchers have proposed re-engineering the processor/memory interface to allow for activating only the DRAM chips at which the requested words are stored [1,16,58,59]. While potentially effective in the server context as well, these proposals require disruptive changes to commodity memory technology.…”
Section: Related Work
confidence: 99%
“…Instruction-based predictors have been extensively used in the context of data prefetching [25,44], cache-coherence action prediction [19], NOC power reduction [21], last-write prediction [50], on-chip granularity prediction [26], and offchip bandwidth reduction [17,58]. However, none of these works has targeted improving row buffer locality by predicting the memory page access density.…”
Section: Related Work
confidence: 99%
“…Repetitive calls to these functions result in repetitive data access patterns (i.e., page footprints) that can be exploited to predict future data accesses upon subsequent calls to the same function. The correlation between code and data access patterns has been heavily exploited for data prefetching [2], [15], [27] and filtering of unused data [10], [14], [16], [32].…”
Section: A Unison Cache
confidence: 99%
“…Fortunately, coherence snoops are not common in many applications (e.g., 1/100 cache operations in SpecJBB) as a coherence directory and an inclusive LLC filter them out. We do not have access to an industry-grade 32nm library, so we synthesized at a higher 180nm node size and scaled the results to 32nm (latency and energy scaled proportional to Vdd (taken from [36]) and Vdd² respectively). Large caches with many words per set (≡ highly associative conventional cache) need careful consideration.…”
Section: Tag-only Operations
confidence: 99%
“…[7]) to restructure data for improved spatial efficiency. There have also been efforts from the architecture community to predict spatial locality [29,34,17,36], which we can leverage to predict Amoeba-Block ranges. Finally, cache compression is an orthogonal body of work that does not eliminate unused words but seeks to minimize the overall memory footprint [1].…”
Section: Related Work
confidence: 99%