Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Workshop on Graphics Hardware 1998
DOI: 10.1145/285305.285321

Prefetching in a texture cache architecture

Abstract: Texture mapping has become so ubiquitous in real-time graphics hardware that many systems are able to perform filtered texturing without any penalty in fill rate. The computation rates available in hardware have been outpacing the memory access rates, and texture systems are becoming constrained by memory bandwidth and latency. Caching in conjunction with prefetching can be used to alleviate this problem. In this paper, we introduce a prefetching texture cache architecture designed to take advantage of the acce…
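
The abstract's central point is that texel addresses can be computed well before the texel data is actually consumed, so misses can be requested from memory early and their latency overlapped with useful work. The C++ sketch below is only a rough software model of that caching-plus-prefetching idea; the class and member names (PrefetchingTextureCache, PendingFragment, and so on) are assumptions made for illustration, not the hardware structures described in the paper.

#include <cstdint>
#include <queue>
#include <unordered_map>
#include <vector>

// One fragment's outstanding texture work: the cache blocks it will read.
struct PendingFragment {
    std::vector<uint64_t> blockAddrs;
};

class PrefetchingTextureCache {
public:
    // Stage 1: tag check and miss request, run as soon as addresses are known.
    void issue(const PendingFragment& frag) {
        for (uint64_t addr : frag.blockAddrs) {
            if (cache_.find(addr) == cache_.end()) {
                memoryRequests_.push(addr);   // start the fetch immediately
                cache_[addr] = false;         // allocated, data not yet arrived
            }
        }
        waiting_.push(frag);                  // fragment waits in a FIFO
    }

    // Memory model: a previously requested block arrives after some latency.
    void memoryReturn(uint64_t addr) { cache_[addr] = true; }

    // Next outstanding memory request, if any (what a memory system would drain).
    bool popRequest(uint64_t& addr) {
        if (memoryRequests_.empty()) return false;
        addr = memoryRequests_.front();
        memoryRequests_.pop();
        return true;
    }

    // Stage 2: the oldest fragment can be filtered only once all its blocks arrived.
    bool frontReady() const {
        if (waiting_.empty()) return false;
        for (uint64_t addr : waiting_.front().blockAddrs)
            if (!cache_.at(addr)) return false;
        return true;
    }

private:
    std::unordered_map<uint64_t, bool> cache_;  // block address -> data arrived?
    std::queue<uint64_t> memoryRequests_;
    std::queue<PendingFragment> waiting_;
};

As long as the fragment FIFO is deep enough to cover the memory round-trip, new requests keep being issued while older fragments wait, which is the sense in which prefetching hides latency here.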

Cited by 79 publications (35 citation statements) · References 11 publications

“…Finally, it can be combined with a hardware rendering pipeline, functioning in that context like a mipmap texture cache. We believe that many observations made in previous texture caching papers [7,2,10] could be applied in our algorithm, which has similar (but not identical) characteristics. For example, both texture caching and our algorithm perform better when the access pattern exhibits good coherence.…”
Section: Discussion
Confidence: 87%
“…We read from three mipmap levels, whereas trilinear interpolation reads from only two levels, and it is likely that GPUs optimize for trilinear accesses by using two caches for alternate mipmap levels [Igehy et al. 1998], and that reading from three levels causes cache conflicts. GPUs are also likely to optimize for the 2×2 quads of texels accessed by a trilinear interpolant, whereas our fetches are less regular.…”
Section: Results
Confidence: 99%
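
The remark about two caches for alternate mipmap levels follows from how trilinear filtering picks its levels: the two levels bracketing the computed level of detail always differ by one, so they have opposite parity and can be split across an even-level cache and an odd-level cache without conflicting. The snippet below is a hypothetical illustration of that level selection, not GPU or driver code.

#include <algorithm>
#include <cmath>

// Trilinear filtering blends bilinear samples from the two mipmap levels that
// bracket the computed level of detail (LOD); each bilinear sample reads a
// 2x2 quad of texels from its level.
struct TrilinearLevels {
    int fine;     // floor(lod): the higher-resolution level
    int coarse;   // fine + 1:   the lower-resolution level
    float blend;  // weight of the coarse level in the final blend
};

TrilinearLevels selectLevels(float lod, int maxLevel) {
    lod = std::clamp(lod, 0.0f, static_cast<float>(maxLevel));
    int fine = static_cast<int>(std::floor(lod));
    int coarse = std::min(fine + 1, maxLevel);
    return {fine, coarse, lod - static_cast<float>(fine)};
}

Because fine and coarse differ by exactly one (except when clamped at the last level), routing even levels to one cache and odd levels to the other keeps a trilinear sample's two fetches in different caches; a third level, as used in the quoted work, necessarily shares a cache with one of the other two.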
“…Once the pruning is done, subsequent arcs can be prefetched while previously fetched arcs are being processed in the next pipeline stages. Figure 6 shows our prefetching architecture for the Arc cache, which is inspired by the design of texture caches for GPUs [25]. Texture fetching exhibits characteristics similar to arc fetching, as all the texture addresses can also be computed well in advance of the time the data is required.…”
Section: A. Hiding Memory Latency
Confidence: 99%
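
The property these authors exploit, namely that arc (or texel) addresses are computable long before the data is needed, has a familiar software analogue: issuing prefetches a fixed distance ahead of the element currently being processed. The loop below is a generic sketch of that analogy, not the cited hardware design; kPrefetchDistance is an assumed tuning parameter and __builtin_prefetch is a GCC/Clang-specific hint.

#include <cstddef>
#include <vector>

// Assumed tuning parameter: how many iterations ahead to prefetch.
constexpr std::size_t kPrefetchDistance = 16;

float sumInOrder(const std::vector<float>& values,
                 const std::vector<std::size_t>& order) {
    float acc = 0.0f;
    for (std::size_t i = 0; i < order.size(); ++i) {
        // The address of a future element is already known, so hint the
        // hardware to start loading it; by the time the loop reaches that
        // element, the cache line has (likely) arrived.
        if (i + kPrefetchDistance < order.size())
            __builtin_prefetch(&values[order[i + kPrefetchDistance]]);
        acc += values[order[i]];
    }
    return acc;
}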