1995
DOI: 10.1109/72.363476

Implementing regularly structured neural networks on the DREAM machine

Abstract: High-throughput implementations of neural network models are required to transfer the technology from small prototype research problems into large-scale "real-world" applications. The flexibility of these implementations in accommodating modifications to the neural network computation and structure is of paramount importance. The performance of many implementation methods today depends greatly on the density and interconnection structure of the neural network model being implemented. A principal c…


Cited by 22 publications (5 citation statements)
References 32 publications
“…The second variation is based on the fact that the computations in a neural network are basically matrix products [16][17][18][19]. The advantage of this approach is that the amount of data communicated between processors is moderate and evenly distributed, although in a multilayer perceptron the synaptic matrix is lower triangular.…”
Section: Our Proposed Pipeline
confidence: 99%
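The observation quoted above — that a neural-network layer update is essentially a matrix product — can be sketched in a few lines. All names and sizes here are illustrative, not taken from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 3
W = rng.standard_normal((n_out, n_in))   # synaptic weight matrix (illustrative)
x = rng.standard_normal(n_in)            # input activations

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One layer's forward pass: a matrix-vector product followed by a nonlinearity.
y = sigmoid(W @ x)
print(y.shape)  # (3,)
```

On a parallel machine, each processor can hold a block of `W` and compute its slice of `W @ x`, which is why the inter-processor traffic is moderate and evenly distributed.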
“…The second variation is based on the fact that the computations in a neural network are basically matrix products [16][17][18][19]. The advantage of this approach is the amount of data communicated between processors is moderate and evenly distributed, although in a multilayer perceptron the synaptic matrix is lower triangular.…”
Section: Our Proposed Pipelinementioning
confidence: 99%
“…Examples can be found in [7][8][9][10][11][12][13][14][15][16][17]. In this paper, an implementation of an HNN on an SRAM-based FPGA is shown.…”
Section: Advances In Artificial Neural Systems
confidence: 99%
“…From the simulation results presented in the previous section it can be clearly shown that a HNN of four nodes will only … A comparison with previous work relating to general-purpose parallel machines shows the performance superiority of our implementation over known implementations on planar architectures. Most known implementations [11,13,14,16,17] require O(N) time complexity, whereas the proposed implementation requires O(log N) time complexity. Implementations on nonplanar architectures, such as the hypercube, show a minor performance gain over our design at the cost of a much more complex interconnection network [10,15].…”
Section: Performance and Comparison With Previous Work
confidence: 99%
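The O(N) versus O(log N) contrast in the statement above comes from how the N partial products of a weighted sum are combined: sequentially (N-1 dependent steps) or as a balanced pairwise tree (ceil(log2 N) parallel steps). A minimal sketch, counting tree-reduction steps:

```python
import math

def tree_reduce(values):
    """Pairwise (tree) reduction: each loop iteration models one parallel
    step in which all adjacent pairs are combined simultaneously."""
    steps = 0
    while len(values) > 1:
        values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
        steps += 1
    return values[0], steps

total, steps = tree_reduce(list(range(16)))
print(total, steps)               # 120 4
print(math.ceil(math.log2(16)))   # 4 parallel steps, vs 15 sequential adds
```

Each parallel step halves the number of partial sums, so N=16 values need only 4 steps rather than 15, which is the source of the logarithmic time complexity claimed for nonplanar (and the proposed) architectures.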
“…Thus, the true advantage of using Hopfield-type neural networks to solve difficult optimization problems relates to speed considerations. Due to their inherently parallel structure and simple computational requirements, neural network techniques are especially suitable for direct hardware implementation, using analog or digital integrated circuits [22], or parallel simulations [23]. Moreover, the Hopfield neural networks have very natural implementations in optics [24].…”
Section: Introduction
confidence: 99%