Proceedings of the 2001 International Symposium on Low Power Electronics and Design (ISLPED '01)
DOI: 10.1145/383082.383088
Instruction flow-based front-end throttling for power-aware high-performance processors

Abstract: We present a number of power-aware instruction front-end (fetch/decode) throttling methods for high-performance dynamically-scheduled superscalar processors. Our methods reduce power dissipation by selectively turning on and off instruction fetch and decode. Moreover, they have a negligible impact on performance as they deliver instructions just in time for exploiting the available parallelism. Previously proposed front-end throttling methods rely on branch prediction confidence estimation. We introduce a new …

Cited by 59 publications (54 citation statements)
References 7 publications
“…Using these models, two types of architectural Dynamic Thermal Management (DTM) techniques have been proposed to prevent the chip from reaching critical temperature levels. Temporal techniques slow down heat accumulation either at fine granularity through fetch toggling [10], decode throttling [13], frequency and voltage scaling [6], or at coarse granularity through periodically stopping the computation to induce cooling [14]. Obviously, slowing down or stopping the entire computation engenders significant performance degradation.…”
Section: Related Work
confidence: 99%
“…As noted in prior work [4,17,18,23], instruction delivery power is higher than necessary because of the performance focused design strategy at the high end. In such designs, the front-end fetch mechanism provides instructions using the peak architected bandwidth, as early as possible, by making use of sophisticated branch prediction algorithms.…”
Section: Introduction
confidence: 93%
“…Other methods of fetch gating [4,17] attempt to reduce idle energy by making the fetch mechanism more demand-driven; that is, instruction fetch is gated when the downstream utilization is high or the flow rate mismatch (between decode and commit) is high. In this context of flow rate matching, the issue queue plays a central role for two reasons.…”
Section: Introduction
confidence: 99%
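As a rough illustration of the flow-rate-matching idea in the statement above — and only a sketch, not the cited papers' exact mechanism — a demand-driven front end can gate fetch whenever the decode rate outruns the commit rate by more than some slack. The function name, rate inputs, and threshold below are all illustrative assumptions.

```python
def simulate_fetch_gating(decode_rates, commit_rates, threshold=2):
    """Per-cycle gating decisions for a demand-driven front end.

    Fetch is gated in any cycle where decode outpaces commit by more
    than `threshold` instructions (a flow-rate mismatch). All names and
    the threshold value are illustrative, not from the cited work.
    """
    return [(d - c) > threshold for d, c in zip(decode_rates, commit_rates)]

# Example: only the middle cycle shows enough mismatch to gate fetch.
decisions = simulate_fetch_gating([4, 6, 3], [4, 2, 3])
```

A real implementation would derive the rates from pipeline counters and apply hysteresis so the gate does not oscillate cycle to cycle.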
“…In the case of low confidence, they gate the pipeline by stalling instruction fetch. Baniasadi and Moshovos [2] extend this approach to throttle the instruction flow in wide-issue superscalar processors to achieve energy reduction. They use instruction flow information, such as the rate of instructions passing through stages, to determine whether to stall stages.…”
Section: Clock and Fetch Gating
confidence: 99%
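The confidence-based gating described above can be sketched as a simple predicate: stall fetch while too many unresolved low-confidence branches are in flight. This is a minimal illustration under assumed names and thresholds, not the published mechanism.

```python
def gate_fetch_on_confidence(low_conf_branches_in_flight, max_low_conf=1):
    """Stall instruction fetch while more than `max_low_conf` unresolved
    low-confidence branches are in flight, since fetched work past such
    branches is likely to be squashed. The threshold is illustrative."""
    return low_conf_branches_in_flight > max_low_conf

# With two unresolved low-confidence branches, fetch is gated;
# with one or none, fetch proceeds.
gated = gate_fetch_on_confidence(2)
```

Flow-based throttling, as in Baniasadi and Moshovos's extension, replaces the confidence counter with rate measurements at pipeline stages, but the gating decision has the same shape.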