Microprocessor execution speeds are improving at a rate of 50%-80% per year, while DRAM access times are improving at a much lower rate of 5%-10% per year. Computer systems are rapidly approaching the point at which overall system performance is determined not by the speed of the CPU but by the speed of the memory system. We present a high-performance memory system architecture that overcomes the growing speed disparity between high-performance microprocessors and current-generation DRAMs. A novel prediction and prefetching technique is combined with a distributed cache architecture to build a high-performance memory system. We use a table-based prediction scheme with a prediction cache to prefetch data from the on-chip DRAM array to an on-chip SRAM prefetch buffer. By prefetching data we are able to hide the large latency associated with DRAM access and cycle times. Our experiments show that with a small (32 KB) prediction cache we can achieve an effective main memory access time that is close to the access time of larger secondary caches.
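The abstract describes a table-based prediction scheme feeding an SRAM prefetch buffer but does not give its details. The following is a minimal software sketch of one common form of such a predictor, a correlation table that maps each accessed address to the address that followed it last time; the class name, buffer size, and eviction policy are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of a table-based predictive prefetcher: a prediction
# table records, for each address, the address that followed it on the
# previous occurrence, and predicted lines are staged in a small prefetch
# buffer (standing in for the on-chip SRAM buffer). All names and sizes
# here are illustrative, not taken from the paper.

class PredictivePrefetcher:
    def __init__(self, buffer_lines=8):
        self.table = {}        # prediction cache: addr -> predicted next addr
        self.buffer = set()    # prefetch buffer (line addresses)
        self.buffer_lines = buffer_lines
        self.last_addr = None
        self.hits = 0
        self.accesses = 0

    def access(self, addr):
        """Service one memory access; return True on a prefetch-buffer hit."""
        self.accesses += 1
        hit = addr in self.buffer
        if hit:
            self.hits += 1
        # Learn the observed transition for future predictions.
        if self.last_addr is not None:
            self.table[self.last_addr] = addr
        # Prefetch the predicted successor of this access, if one is known.
        pred = self.table.get(addr)
        if pred is not None:
            if len(self.buffer) >= self.buffer_lines:
                self.buffer.pop()  # crude eviction; hardware would use FIFO/LRU
            self.buffer.add(pred)
        self.last_addr = addr
        return hit

# A repeating access pattern: after one warm-up pass the table has learned
# every transition, and subsequent passes are served from the buffer.
p = PredictivePrefetcher()
pattern = [0x100, 0x140, 0x180, 0x1C0]
for _ in range(4):
    for a in pattern:
        p.access(a)
```

On this repeating pattern the first pass only trains the table; from the second pass on, each access prefetches the line the next access will need, which is how such a scheme hides DRAM latency behind the current access.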