2005 International Conference on Computer Design
DOI: 10.1109/iccd.2005.112
Utilizing horizontal and vertical parallelism with a no-instruction-set compiler for custom datapaths

Cited by 32 publications (30 citation statements)
References 8 publications
“…CGRAs consist of a grid of functional units and register files. Programs are mapped onto the grid by the compiler, which has a great deal of flexibility in scheduling. Another architecture that gives the compiler direct control of the micro-architecture is the No Instruction Set Computer (NISC) [8]. Unlike other architectures, there is no fixed ISA that bridges the compiler with the hardware.…”
Section: Related Work
confidence: 99%
“…Multiple pipelined computations, constrained by the degree of horizontal parallelism, can be initiated and flowed through the FPS units in a circular manner. In this way, cooperation between horizontal and vertical parallelism is enabled at a finer granularity than in the NISC approach [17], where vertical parallelism strictly follows only the conventional flow inside each pipelined functional unit.…”
Section: Micro-architectural Abstractions of the Reconfigurable Architecture
confidence: 99%
“…Compared to reconfigurable architectures with aggressive chaining opportunities [16], gains of 28.4% in execution time and 53.9% in area complexity are reported. Assuming reconfigurable datapaths with horizontal/vertical parallelism and data forwarding features [17], our methodology delivers average improvements of up to 70% in latency and 30.9% in area. The Area-Delay product and area utilization metrics further prove the efficiency of our approach.…”
Section: Introduction
confidence: 99%