Proceedings of the ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems 2011
DOI: 10.1145/1993744.1993749

Studying the impact of hardware prefetching and bandwidth partitioning in chip-multiprocessors

Abstract: Modern high-performance microprocessors widely employ hardware prefetching techniques to hide long memory access latency. While very useful, hardware prefetching tends to aggravate the bandwidth wall, a problem where system performance is increasingly limited by the availability of off-chip pin bandwidth in Chip Multi-Processors (CMPs). In this paper, we propose an analytical model-based study to investigate how hardware prefetching and memory bandwidth partitioning impact CMP system performance and how they…

Cited by 31 publications (10 citation statements)
References 38 publications
“…A major limitation of the cache bypassing technique is that it may be only partially beneficial, since it has already polluted the cache by replacing data once. Liu et al (2008) and Liu and Solihin (2011) aim to predict dead blocks based on bursts of accesses to a cache block rather than individual accesses. The work investigates references made to a cache line as it moves from the MRU position to the LRU position in order to predict dead blocks.…”
Section: Deadblock Predictions To Reduce Cache Contention
confidence: 99%
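The burst-based dead-block idea quoted above can be illustrated with a minimal sketch: a "burst" is the run of accesses a line receives while it is at the MRU position, and a line is predicted dead once it reaches the burst count at which it was previously evicted without further reuse. This is a toy illustration only, not the cited papers' exact predictor; the class and table names are hypothetical.

```python
from collections import OrderedDict


class BurstDeadBlockPredictor:
    """Toy burst-based dead-block sketch (names hypothetical): count
    bursts per line in one LRU-ordered cache set, learn each line's
    burst count at eviction, and predict death at that count."""

    def __init__(self, ways=4):
        self.ways = ways
        self.set = OrderedDict()   # tag -> bursts seen; last entry = MRU
        self.learned = {}          # tag -> burst count observed at eviction

    def access(self, tag):
        if tag in self.set:
            mru = next(reversed(self.set))
            if mru != tag:         # re-promotion from a non-MRU slot: new burst
                self.set[tag] += 1
            self.set.move_to_end(tag)
        else:
            if len(self.set) >= self.ways:
                victim, bursts = self.set.popitem(last=False)  # evict LRU
                self.learned[victim] = bursts                  # train predictor
            self.set[tag] = 1      # the fill starts the first burst

    def predict_dead(self, tag):
        """True once the line reaches the burst count at which it died before."""
        return tag in self.set and self.set[tag] >= self.learned.get(tag, float("inf"))
```

Tracking bursts rather than individual accesses makes the prediction trigger only when the line leaves and re-enters the MRU position, which is the behaviour the quoted work exploits.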
“…• Prefetching and bandwidth partitioning: While Moore's law doubles the number of transistors every generation, off-chip bandwidth grows by only about 10 to 15%, so it becomes challenging for bandwidth to handle the pressure from increasing core counts, leading to the 'bandwidth wall' crisis. Exploiting the combined effect of prefetching and bandwidth partitioning, Liu and Solihin (2011) propose a scheme that investigates prefetching conditions and describes a bandwidth partitioning policy coupled with the effects of prefetching.…”
Section: Studying Impact Of Prefetching With Other Shared Resources
confidence: 99%
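One way to picture a prefetch-aware bandwidth partition is a weighted share in which each core's demand misses count fully while its prefetch traffic is discounted by measured prefetch accuracy, so inaccurate prefetchers receive less off-chip bandwidth. This is a toy illustration under that assumption, not the paper's analytical model; the function and field names are hypothetical.

```python
def partition_bandwidth(total_bw, cores):
    """Toy prefetch-aware split (hypothetical, not the cited model):
    weight = demand traffic + accuracy-scaled prefetch traffic, then
    divide total_bw proportionally to each core's weight."""
    weights = {name: c["demand"] + c["accuracy"] * c["prefetch"]
               for name, c in cores.items()}
    total = sum(weights.values())
    return {name: total_bw * w / total for name, w in weights.items()}


# Two cores with identical traffic but different prefetch accuracy:
shares = partition_bandwidth(10.0, {
    "c0": {"demand": 50, "prefetch": 100, "accuracy": 0.9},
    "c1": {"demand": 50, "prefetch": 100, "accuracy": 0.1},
})
```

Under these numbers the accurate prefetcher (c0) is granted the larger share, capturing the intuition that useless prefetches should not earn a core more bandwidth.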
“…Regarding the NoC, some interesting approaches [21,22] implement virtual channels and dynamically adjust the priority between regular and prefetch requests coming from multiple cores. With respect to memory controller policies, recent proposals [23,24,25] have also focused on multicores. These policies take into account the prefetcher performance to dynamically select the priority of both regular and prefetch requests.…”
Section: Related Work
confidence: 99%
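The memory-controller policies cited above share a simple core idea: demand requests outrank prefetches, and prefetch priority is adjusted (or prefetches are dropped) based on observed prefetcher performance. A minimal sketch of that idea, with hypothetical class and threshold names and no claim to match any cited proposal exactly:

```python
import heapq


class PrefetchAwareMemQueue:
    """Toy memory-request queue (names hypothetical): demand requests
    always drain before prefetches, and prefetches are dropped when
    measured prefetch accuracy falls below a threshold."""

    DEMAND, PREFETCH = 0, 1

    def __init__(self, accuracy_threshold=0.5):
        self.q = []
        self.seq = 0                      # tie-breaker preserves FIFO order
        self.threshold = accuracy_threshold
        self.accuracy = 1.0               # would be updated by prefetcher feedback

    def enqueue(self, addr, is_prefetch):
        if is_prefetch and self.accuracy < self.threshold:
            return                        # drop low-value prefetch traffic
        kind = self.PREFETCH if is_prefetch else self.DEMAND
        heapq.heappush(self.q, (kind, self.seq, addr))
        self.seq += 1

    def next_request(self):
        return heapq.heappop(self.q)[2] if self.q else None
```

Serving demands first bounds the latency penalty a core pays for another core's speculative traffic, which is the trade-off these controller policies tune dynamically.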
“…characterize cache pollution in the real system and propose a prefetch manager that controls the aggressiveness at runtime. Liu and Solihin [2011] propose an analytical model to study the interaction of hardware prefetching and bandwidth partitioning on a multicore system.…”
Section: Caffeine On GHB Prefetcher
confidence: 99%