Proceedings of the ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems 2011
DOI: 10.1145/1993744.1993749
Abstract: Modern high performance microprocessors widely employ hardware prefetching techniques to hide long memory access latency. While very useful, hardware prefetching tends to aggravate the bandwidth wall, a problem where system performance is increasingly limited by the availability of off-chip pin bandwidth in Chip Multi-Processors (CMPs). In this paper, we propose an analytical model-based study to investigate how hardware prefetching and memory bandwidth partitioning impact CMP system performance and how they…

Cited by 32 publications (10 citation statements)
References 38 publications (63 reference statements)
“…A major limitation of the cache bypassing technique is that it is only partially beneficial, since it still pollutes the cache by replacing data once. Liu et al (2008), and Liu and Solihin (2011) aim to predict dead blocks based on bursts of accesses to a cache block rather than individual accesses. The work investigates references made to a cache line as it moves from the MRU position to the LRU position to predict dead blocks.…”
Section: Deadblock Predictions To Reduce Cache Contention
confidence: 99%
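The burst-based idea quoted above can be illustrated with a small sketch: a block is predicted dead once it leaves the MRU position, i.e., when its current burst of accesses ends. The class and method names below are illustrative assumptions, not from the cited papers.

```python
from collections import OrderedDict

class BurstDeadBlockSet:
    """One set of an LRU cache that flags non-MRU blocks as predicted dead."""
    def __init__(self, ways):
        self.ways = ways
        self.blocks = OrderedDict()  # ordered oldest (LRU) to newest (MRU)

    def access(self, tag):
        hit = tag in self.blocks
        if hit:
            self.blocks.move_to_end(tag)       # promote to MRU: burst continues
        else:
            if len(self.blocks) >= self.ways:  # evict the LRU victim
                self.blocks.popitem(last=False)
            self.blocks[tag] = True
        return hit

    def predicted_dead(self, tag):
        # A block is predicted dead when another block has displaced it from MRU,
        # ending its burst of accesses.
        mru = next(reversed(self.blocks))
        return tag in self.blocks and tag != mru

s = BurstDeadBlockSet(ways=4)
s.access("A"); s.access("A")       # burst on A; A sits at MRU
assert not s.predicted_dead("A")
s.access("B")                      # B displaces A from MRU: A's burst ended
assert s.predicted_dead("A")
```

The key point the quote makes is that the predictor watches the MRU-to-LRU transition of a line, rather than counting individual accesses.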
“…• Prefetching and bandwidth partitioning: While Moore's law doubles the number of transistors rapidly, off-chip bandwidth grows by barely 10 to 15%, so it becomes challenging for bandwidth to handle the pressure of increasing core counts, leading to the ‘bandwidth wall’ crisis. Exploiting the combined effect of prefetching and bandwidth partitioning, Liu and Solihin (2011) propose a scheme that investigates prefetching conditions and describes a bandwidth partitioning policy coupled with the effects of prefetching.…”
Section: Studying Impact Of Prefetching With Other Shared Resources
confidence: 99%
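To make the coupling between prefetching and bandwidth partitioning concrete, here is a toy sketch (not the paper's actual analytical model): each core is charged for both its demand and prefetch traffic, but prefetch traffic is credited by the prefetcher's accuracy, so inaccurate prefetching earns a smaller bandwidth share. All names and the weighting rule are assumptions for illustration.

```python
def bandwidth_shares(total_bw, cores):
    """cores: list of dicts with demand_rate, prefetch_rate, prefetch_accuracy.
    Returns each core's bandwidth share, proportional to its useful traffic."""
    # Weight each core by all demand traffic plus only its accurate prefetches.
    weights = [c["demand_rate"] + c["prefetch_accuracy"] * c["prefetch_rate"]
               for c in cores]
    wsum = sum(weights)
    return [total_bw * w / wsum for w in weights]

cores = [
    {"demand_rate": 2.0, "prefetch_rate": 2.0, "prefetch_accuracy": 0.9},
    {"demand_rate": 2.0, "prefetch_rate": 2.0, "prefetch_accuracy": 0.1},
]
shares = bandwidth_shares(25.6, cores)
assert shares[0] > shares[1]  # the accurate prefetcher earns the larger share
```

The design intent mirrors the quoted statement: a partitioning policy cannot ignore prefetch traffic, because an aggressive but inaccurate prefetcher consumes bandwidth without contributing useful data.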
“…Regarding the NoC, some interesting approaches [21,22] implement virtual channels and dynamically adjust the priority between regular and prefetch requests coming from multiple cores. With respect to memory controller policies, recent proposals [23,24,25] have also focused on multicores. These policies take into account the prefetcher performance to dynamically select the priority of both regular and prefetch requests.…”
Section: Related Work
confidence: 99%
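The memory-controller policies described above can be sketched as a priority queue in which demand requests outrank prefetches, and a prefetch is promoted to demand-level priority only when its prefetcher's measured accuracy is high. The threshold and class names are illustrative assumptions, not taken from the cited proposals.

```python
import heapq

DEMAND, PREFETCH = 0, 1
ACCURACY_THRESHOLD = 0.75   # assumed tuning knob, not from the cited work

class PriorityScheduler:
    """Memory-request scheduler: demand first, accurate prefetches promoted."""
    def __init__(self):
        self.queue = []
        self.seq = 0  # FCFS tie-breaker within a priority class

    def enqueue(self, kind, accuracy=0.0):
        # Accurate prefetchers earn demand-level priority for their requests.
        prio = DEMAND if kind == DEMAND or accuracy >= ACCURACY_THRESHOLD else PREFETCH
        heapq.heappush(self.queue, (prio, self.seq, kind))
        self.seq += 1

    def issue(self):
        return heapq.heappop(self.queue)[2]

sched = PriorityScheduler()
sched.enqueue(PREFETCH, accuracy=0.3)  # inaccurate prefetch: low priority
sched.enqueue(DEMAND)
sched.enqueue(PREFETCH, accuracy=0.9)  # accurate prefetch: demand priority
assert sched.issue() == DEMAND         # demand issues before the stale prefetch
```

This captures the dynamic aspect the quote highlights: the relative priority of regular and prefetch requests is not fixed, but adjusted from prefetcher performance.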
“…characterize cache pollution in the real system and propose a prefetch manager that controls the aggressiveness at runtime. Liu and Solihin [2011] propose an analytical model to study the interaction of hardware prefetching and bandwidth partitioning on a multicore system.…”
Section: Caffeine On Ghb Prefetcher
confidence: 99%
“…An aggressive hardware prefetcher may completely hide the latency of off-chip memory accesses. However, it may cause severe interference at the shared resources (last level cache and memory bandwidth) of a multi-core system [Ebrahimi et al 2009, 2011; Wu et al 2011; Seshadri et al 2015; Panda and Balachandran 2015; Jimenez et al 2015; Panda 2016; Lee et al 2008; Liu and Solihin 2011; Bitirgen et al 2008]. To manage prefetching in multi-core systems, prior studies [Srinath et al 2007; Ebrahimi et al 2009, 2011; Panda and Balachandran 2015; Panda 2016] have proposed dynamically controlling (also known as throttling) the prefetcher aggressiveness by adjusting the prefetcher configuration at runtime.…”
Section: Introduction
confidence: 99%
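The throttling idea quoted above can be sketched minimally in the spirit of feedback-directed prefetching: each interval, measure prefetch accuracy and raise or lower the aggressiveness (here, the prefetch degree). The thresholds and bounds below are illustrative assumptions, not values from the cited studies.

```python
LOW_ACC, HIGH_ACC = 0.40, 0.75   # assumed accuracy thresholds
MIN_DEGREE, MAX_DEGREE = 1, 8    # assumed aggressiveness bounds

def throttle(degree, useful, issued):
    """Return the prefetch degree for the next interval, given how many of the
    issued prefetches in the last interval turned out to be useful."""
    if issued == 0:
        return degree
    accuracy = useful / issued
    if accuracy >= HIGH_ACC:
        return min(degree * 2, MAX_DEGREE)   # accurate: prefetch more aggressively
    if accuracy < LOW_ACC:
        return max(degree // 2, MIN_DEGREE)  # polluting: back off
    return degree                            # middling: hold steady

assert throttle(4, useful=90, issued=100) == 8
assert throttle(4, useful=10, issued=100) == 2
```

Runtime adjustment like this is what lets a prefetcher stay aggressive when it hides latency well, while limiting the shared-resource interference the quote warns about.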