2002
DOI: 10.1007/3-540-47847-7_14

High Performance and Energy Efficient Serial Prefetch Architecture

Abstract: Energy efficient architecture research has flourished recently, in an attempt to address packaging and cooling concerns of current microprocessor designs, as well as battery life for mobile computers. Moreover, architects have become increasingly concerned with the complexity of their designs in the face of scalability, verification, and manufacturing concerns. In this paper, we propose and evaluate a high performance, energy and complexity efficient front-end prefetch architecture. This design, called Serial P…

Cited by 9 publications (6 citation statements)
References 17 publications (17 reference statements)
“…To lower the energy consumption and increase thermal efficiency, Reinman et al. (2002) suggested a small cache for the router memory. However, because of the limited size of the cache, the proposed technique is not scalable.…”
Section: Achieving Thermal Efficiency in NoC (mentioning)
Confidence: 99%
“…Using segmented word lines [Ghose and Kamble 1999] for the data portion of the instruction cache, we can fetch the necessary words while activating only the necessary sense-amplifiers, in each case. As front-end decoupling tolerates higher instruction-cache latency without loss in speculation accuracy, we can first access the tags for a set-associative instruction cache, and in subsequent cycles, access the data only in the way that hits [Reinman et al 2002]. Furthermore, we can save decoding and tag access energy in the instruction cache by merging instruction-cache accesses for sequential blocks in the BBQ that hit in the same instruction cache line.…”
Section: Front-end Architecture (mentioning)
Confidence: 99%
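The excerpt above describes serializing tag and data access so that, after the tags of a set-associative instruction cache are compared, only the data array of the hitting way is read. The following Python sketch is a rough behavioral model of that idea, not code from the paper; the cache geometry, class names, and the data_array_reads energy proxy are illustrative assumptions.

# Hypothetical sketch of serial tag-then-data access (assumed parameters).
from dataclasses import dataclass, field
from typing import Optional

BLOCK_BYTES = 32   # assumed line size
NUM_SETS = 128     # assumed number of sets
NUM_WAYS = 4       # assumed associativity


@dataclass
class CacheLine:
    valid: bool = False
    tag: int = 0
    data: bytes = bytes(BLOCK_BYTES)


@dataclass
class SerialICache:
    sets: list = field(default_factory=lambda: [
        [CacheLine() for _ in range(NUM_WAYS)] for _ in range(NUM_SETS)
    ])
    data_array_reads: int = 0   # proxy for data-array energy

    def probe_tags(self, addr: int) -> Optional[int]:
        """First cycle: compare tags only; return the hitting way or None."""
        index = (addr // BLOCK_BYTES) % NUM_SETS
        tag = addr // (BLOCK_BYTES * NUM_SETS)
        for way, line in enumerate(self.sets[index]):
            if line.valid and line.tag == tag:
                return way
        return None

    def read_data(self, addr: int, way: int) -> bytes:
        """Later cycle: read the data array in the single hitting way."""
        index = (addr // BLOCK_BYTES) % NUM_SETS
        self.data_array_reads += 1   # one way read, not NUM_WAYS
        return self.sets[index][way].data

    def fetch(self, addr: int) -> Optional[bytes]:
        way = self.probe_tags(addr)
        if way is None:
            return None              # miss handling not modeled here
        return self.read_data(addr, way)


if __name__ == "__main__":
    cache = SerialICache()
    addr = 0x1040
    index = (addr // BLOCK_BYTES) % NUM_SETS
    tag = addr // (BLOCK_BYTES * NUM_SETS)
    cache.sets[index][2] = CacheLine(valid=True, tag=tag, data=bytes(range(32)))
    block = cache.fetch(addr)
    print("hit:", block is not None, "data-array reads:", cache.data_array_reads)

The extra cycle between probe_tags and read_data is the latency that front-end decoupling is said to tolerate; the energy saving comes from reading one data way instead of all of them.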
“…In recent work [16], we explored selectively accessing cache ways using a decoupled MC cache to create an energy efficient instruction prefetch architecture. In this submission, we expand on that research by (1) examining the use of a serial cache design just for instruction fetch, and (2) comparing our serial fetch design to way-predicted fetch architectures.…”
Section: Serial Fetch Architecture (mentioning)
Confidence: 99%
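The excerpt above contrasts serial fetch with way-predicted fetch. As a back-of-the-envelope illustration only, the sketch below counts expected data-way activations per cache hit under three access policies; the 4-way geometry and 90% prediction accuracy are placeholder values, not results from the paper.

# Hypothetical comparison of data-array activations per fetch (assumed numbers).
def data_activations_per_fetch(policy: str,
                               num_ways: int = 4,
                               prediction_accuracy: float = 0.9) -> float:
    """Expected number of data-way activations for one cache hit."""
    if policy == "parallel":
        # Conventional fetch: all ways' data arrays read while tags compare.
        return float(num_ways)
    if policy == "way_predicted":
        # One way read up front; a misprediction forces a second access.
        return 1.0 + (1.0 - prediction_accuracy)
    if policy == "serial":
        # Tags first, then exactly one data way, at the cost of extra latency.
        return 1.0
    raise ValueError(f"unknown policy: {policy}")


if __name__ == "__main__":
    for policy in ("parallel", "way_predicted", "serial"):
        print(policy, data_activations_per_fetch(policy))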