Proceedings International Parallel and Distributed Processing Symposium
DOI: 10.1109/ipdps.2003.1213088
Miss penalty reduction using bundled capacity prefetching in multiprocessors


Cited by 5 publications (8 citation statements)
References 26 publications
“…However, the coherence and data traffic on the interconnect increase heavily compared to a non-prefetching protocol. We show that by using the bundling technique, previously published in [28], the coherence traffic can be kept under control.…”
Section: Introduction
confidence: 92%
“…The snoop lookups can be largely reduced in sequential prefetching using the bundling technique presented in a previous publication [28]. Bundling lumps the original read request together with the prefetch requests to the consecutive addresses.…”
Section: Sequential Hardware Prefetching
confidence: 99%
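The citation above describes the core of bundling: the original read request is lumped together with its prefetch requests to consecutive addresses, so the bus snoops one combined transaction instead of one per request. A minimal illustrative sketch of that accounting (not the paper's implementation; the function names, prefetch degree, and miss addresses are hypothetical):

```python
# Illustrative sketch: compare address-snoop counts for sequential
# prefetching with and without bundling. Assumes each read miss
# triggers `degree` prefetches to consecutive cache-line addresses.

def snoops_without_bundling(miss_addresses, degree):
    # Each miss issues 1 read plus `degree` prefetch requests,
    # and every request is snooped individually on the bus.
    return sum(1 + degree for _ in miss_addresses)

def snoops_with_bundling(miss_addresses, degree):
    # Bundling lumps the read and its prefetches to consecutive
    # addresses into one bus transaction: one snoop per miss.
    return len(miss_addresses)

misses = [0x100, 0x240, 0x580]  # hypothetical cache-line miss addresses
print(snoops_without_bundling(misses, degree=3))  # 12
print(snoops_with_bundling(misses, degree=3))     # 3
```

The point the citing paper makes is visible in the ratio: with prefetch degree 3, bundling cuts snoop lookups by a factor of four while the same data is still fetched.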
“…Unfortunately, enlarging the cache line size is not as efficient in multiprocessors as in uniprocessors since it can lead to a large amount of false sharing and an increase in data traffic. The influence of cache line size on cache miss rate and data traffic has been studied by several authors [9], [11], [13], [26], [27]. To avoid false sharing and at the same time take advantage of spatial locality, sequential prefetching fetches a number of cache lines having consecutive addresses on a read cache miss.…”
Section: Background: Multiprocessor Prefetching
confidence: 99%
“…None of these protocols have used piggybacking as a method of reducing the address snoops for prefetches. A simple form of bundling applied only to read transactions has previously been studied together with the capacity prefetching technique [26]. However, no evaluation of the possible performance gains of read bundling has previously been performed.…”
Section: Reduction of Address Snoops Through Bundling
confidence: 99%