1982
DOI: 10.1109/tc.1982.1676097
A Data Flow Computer Architecture with Program and Token Memories

Cited by 13 publications (5 citation statements)
References 6 publications
“…In view of data flow parallelism, Zehendner et al. [24] proposed three levels to be exploited: task, block and instruction, and subinstruction. Despite tireless efforts by researchers to innovate better- and faster-performing data flow computers [2,3,4,5], major issues in sensitive operations such as language and data dependencies, token matching, resource management, and well-formed parallelism [6] still persisted.…”

Section: Function
confidence: 99%
“…This network partitioning, however, requires loading graphs in a way that balances workload among PEs to avoid instruction-execution conflicts. Sowa and Murata have proposed partitioning the common memory into program memory and token memory to reduce the size and complexity of the switching networks [20]. Another approach is to eliminate the arbitration network entirely by attaching a dedicated PE to each CB.…”

confidence: 99%
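The program-memory/token-memory split described in that citation can be illustrated with a minimal dataflow interpreter sketch. This is not the paper's actual design; all names and the graph below are hypothetical, and the point is only to show the two memories playing their distinct roles: program memory holds static instruction templates, token memory holds operands waiting to be matched.

```python
# Illustrative sketch (hypothetical, not the Sowa-Murata design):
# program memory maps each node to its static instruction template,
# token memory holds the operands that have arrived so far.

# Program memory: node id -> (operation, arity, destination node ids)
program_memory = {
    "a": ("add", 2, ["m"]),
    "m": ("mul", 2, []),   # final node: no further destinations
}

ops = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}

token_memory = {}  # node id -> list of operand values received so far
results = {}       # node id -> value produced when the node fired

def send(node, value):
    """Deposit a token; fire the node once all operands have matched."""
    waiting = token_memory.setdefault(node, [])
    waiting.append(value)
    op, arity, dests = program_memory[node]
    if len(waiting) == arity:          # token matching succeeded
        result = ops[op](*waiting)
        token_memory[node] = []        # consume the matched tokens
        results[node] = result
        for d in dests:                # forward result tokens
            send(d, result)

# Evaluate (1 + 2) * 10 by feeding input tokens into the graph.
send("a", 1)
send("a", 2)    # node "a" fires and sends 3 to "m"
send("m", 10)   # node "m" fires
print(results["m"])  # 30
```

Because token matching touches only `token_memory` while instruction fetch touches only `program_memory`, the two stores can sit behind separate, simpler networks, which is the size-and-complexity argument the quote attributes to [20].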
“…It does so by accessing [SM82] template "DOT" (Figure 2). "Apply" then equates the template's local name Y with the value of its second input operand, the value "A", and similarly equates w to be "Z."…”

Section: The Execution Model
confidence: 99%
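The "apply" step quoted above amounts to binding a template's formal (local) names to the values of the apply node's input operands. A minimal sketch of that binding, with all names invented rather than taken from [SM82]:

```python
# Hypothetical sketch of template application: bind each formal
# (local) name of a template to the corresponding operand value,
# then evaluate the body in that environment.

template = {
    "name": "DOT",
    "formals": ["y", "w"],                      # local names
    "body": lambda env: (env["y"], env["w"]),   # stand-in body
}

def apply_template(tmpl, *operands):
    # Equate each local name with its operand, as "apply" does above.
    env = dict(zip(tmpl["formals"], operands))
    return tmpl["body"](env)

print(apply_template(template, "A", "Z"))  # ('A', 'Z')
```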
“…In this paper we will distinguish functional languages, which are generally recursive in programming style (being rooted in the lambda calculus [Bar84] [Dar85]), from dataflow languages [Ack82] which, having their underlying foundations in the Petri net [Rei85], have cyclic program graphs [Den75] [GKW85] and therefore an iterative programming style [Ack82], as well as other sequential biases, such as a preoccupation with time-sequential "streams" [CDP83] [Kro83] [Wit89]. The dataflow approach has dominated recent research into data-driven parallel machine architectures [GKW85] [SM82] [SY+89] [TP+91], while advanced architectures for functional languages have generally been based on demand-driven ("lazy," that is, normal-order evaluation) implementations [Cur77]. But the data-driven functional programming style has the potential of exposing a tremendous amount of inherent parallelism in a broad class of vector- and matrix-oriented computations [Veg84], something that has only been seriously investigated for the dataflow paradigm [AA82] [CA88].…”

Section: Introduction
confidence: 99%