2010
DOI: 10.2200/s00309ed1v01y201011cac012
Processor Microarchitecture: An Implementation Perspective

Cited by 35 publications (18 citation statements)
References 30 publications
“…The CG-OoO processor introduces the Skipahead issue model. In OoO and CG-OoO, in-flight instructions are maintained in queues that are partly RAM and partly CAM tables (González et al 2010). For the InO model, instructions are held in a small FIFO buffer.…”
Section: Instruction Scheduling
confidence: 99%
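The quoted passage contrasts two issue disciplines: an OoO queue whose entries are woken by a CAM-style associative search, and a small FIFO buffer for in-order issue. A minimal sketch of the difference, with hypothetical function names and a toy readiness predicate (not code from the book):

```python
# Toy model of two issue disciplines (illustrative only).
from collections import deque

def fifo_issue(instrs, ready):
    """In-order (InO) style: issue from the head of a FIFO, and only
    while the head instruction is ready -- a stalled head blocks everything."""
    q = deque(instrs)
    issued = []
    while q and ready(q[0]):
        issued.append(q.popleft())
    return issued

def cam_issue(instrs, ready):
    """OoO style: a CAM-like associative search over all in-flight entries
    selects any ready instruction, regardless of its age in the queue."""
    return [i for i in instrs if ready(i)]
```

For example, if the oldest instruction is stalled on an operand, `fifo_issue` issues nothing while `cam_issue` still issues the younger ready instructions, which is exactly why OoO queues pay for the CAM hardware.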
“…This choice of machines corresponds to configurations that are currently most often used by researchers for evolutionary experiments, either as standalone computers or as end-nodes in a hybrid distributed evolutionary architecture. It also allows studying in detail the most interesting range, from 4-core to 16-core machines, with and without hyper-threading technology [8,29], and investigating the relationships between the number of cores, the number of threads, and the resulting performance.…”
Section: Multithreading Performance
“…2: a linear slope for 1-4 threads and a nearly constant performance for 4+ threads. Configurations 4/8-W7, 4/8-W8, and 16/32-L feature hyper-threading technology (Intel's simultaneous multithreading implementation, SMT [8,29]) and can execute two simultaneous threads on each core, with slightly less performance compared to one thread per core, as additional threads share the same CPU hardware resources. This technology yields two nearly linear slopes: one for increasing the number of threads up to the number of physical cores, and another, less steep slope, up to twice the number of cores.…”
Section: Simulation and Evolution
confidence: 99%
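The two-slope behavior described above (linear gain up to the physical core count, a shallower gain from SMT threads up to twice that count) can be captured in a small throughput model. This is an illustrative sketch with an assumed per-SMT-thread gain parameter, not a fit to the cited measurements:

```python
def smt_throughput(threads, cores, smt_gain=0.25):
    """Piecewise-linear throughput model for an SMT machine.

    Up to `cores` threads, each thread contributes one core's worth of
    throughput (first, steep slope). From `cores` to 2*`cores` threads,
    each extra thread shares core resources with a sibling and adds only
    `smt_gain` of a core (second, shallower slope). `smt_gain` is an
    assumed illustrative value, not a measured one."""
    primary = min(threads, cores)                     # threads on their own core
    extra = max(0, min(threads, 2 * cores) - cores)   # SMT sibling threads
    return primary + smt_gain * extra
```

With 4 physical cores, the model gives throughput 4.0 at 4 threads and 5.0 at 8 threads: the second slope is positive but clearly less steep, matching the qualitative shape reported for the hyper-threaded configurations.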
“…We denote the energy consumption per tag-array access as L.r.tag and the consumption per data-array read as L.r.line. The sum of these two values is referred to as L.r, and the consumption involved in accessing the cache to perform a write as L.w.…”
Section: Energy Model
confidence: 99%
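The energy model in the quoted passage reduces to two equations: L.r = L.r.tag + L.r.line per read, and a total of reads times L.r plus writes times L.w. A direct transcription (variable names and the sample energy values are illustrative, not from the cited paper):

```python
def read_energy(n_reads, e_tag, e_line):
    """Energy of n_reads cache reads: each read accesses the tag array
    (L.r.tag) and reads a data-array line (L.r.line), so per-read cost
    is L.r = L.r.tag + L.r.line."""
    return n_reads * (e_tag + e_line)

def cache_energy(n_reads, n_writes, e_tag, e_line, e_write):
    """Total cache access energy: n_reads * L.r + n_writes * L.w."""
    return read_energy(n_reads, e_tag, e_line) + n_writes * e_write
```

For instance, 10 reads at L.r.tag = 0.1 and L.r.line = 0.4 (arbitrary energy units) plus 2 writes at L.w = 0.6 total 10 * 0.5 + 2 * 0.6 = 6.2 units.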