Proceedings of the IEEE 1991 Custom Integrated Circuits Conference
DOI: 10.1109/cicc.1991.164044

GANGLION-a fast hardware implementation of a connectionist classifier

Abstract: This paper describes the implementation of GANGLION, a fully interconnected, digital, feed-forward connectionist classifier with one hidden layer, capable of 4.48 billion interconnections per second. The single-card classifier, built using only off-the-shelf components, relies heavily on field-programmable gate arrays.
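The architecture the abstract describes — one hidden layer, fully interconnected — can be illustrated with a minimal software sketch. This is not the GANGLION hardware itself; the layer sizes, weights, and sigmoid activation below are illustrative assumptions.

```python
import math

def sigmoid(z):
    # Standard logistic activation; an assumption, the paper's exact
    # activation function is not given in the abstract.
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, w_out):
    """Fully interconnected feed-forward pass with one hidden layer.

    w_hidden: one weight row per hidden node (every input feeds every node);
    w_out:    one weight row per output node (every hidden node feeds it).
    """
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w_out]
```

Each multiply inside the sums corresponds to one "interconnection" in the abstract's 4.48-billion-per-second figure; the hardware evaluates many of them in parallel where this sketch loops sequentially.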

Cited by 20 publications (4 citation statements)
References 2 publications
“…As mentioned previously, a fully- or semi-parallel VLSI implementation of a DBN in binary form requires a lot of hardware resources. Therefore, many works target FPGAs [30]-[35], but none manage to fit a fully parallel deep neural network architecture on a single FPGA board. Recently, a fully pipelined FPGA architecture of a factored RBM (fRBM) was proposed in [9], which could implement a single-layer neural network of 4096 nodes on a Virtex-6 FPGA board using a virtualization technique, i.e., time-multiplexed sharing.…”
Section: B. FPGA Implementation
confidence: 99%
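The time-multiplexed ("virtualization") idea mentioned in the excerpt above trades throughput for hardware: one physical arithmetic unit is reused across many virtual nodes instead of instantiating one per node. A hedged sketch, with names and structure chosen here for illustration only:

```python
def evaluate_layer_time_multiplexed(weights, inputs):
    """Evaluate a layer by scheduling every virtual node onto a single
    shared multiply-accumulate (MAC) unit, one step at a time.

    A fully parallel design would instead instantiate one MAC per weight;
    time-multiplexing fits a large layer into limited hardware at the cost
    of proportionally more clock cycles.
    """
    outputs = []
    for row in weights:              # one virtual node per weight row
        acc = 0.0
        for w, x in zip(row, inputs):
            acc += w * x             # the single shared MAC step
        outputs.append(acc)
    return outputs
```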
“…This approach begins by training a neural network to model the circuit under design. As described in Section II-A, the weights of the neural network are adjusted at this stage to minimize its error function given by (3). The solution searching is then performed by applying a modified backpropagation learning rule to the trained network.…”
Section: Design by a Neural Network Learning Process
confidence: 99%
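The "solution searching" step described above — applying a backpropagation-style update to a trained, frozen network — amounts to gradient descent on the network's *inputs* (the design parameters) rather than its weights. A minimal sketch of that idea, assuming a generic differentiable model `g` and target response; the function names, learning rate, and quadratic example are illustrative assumptions, not the cited paper's method:

```python
def search_input(g, grad_g, target, x0, lr=0.1, steps=100):
    """Adjust input x so that the frozen model g(x) approaches target.

    Minimizes 0.5 * (g(x) - target)**2 by gradient descent on x;
    by the chain rule the gradient is (g(x) - target) * g'(x).
    """
    x = x0
    for _ in range(steps):
        err = g(x) - target
        x -= lr * err * grad_g(x)
    return x
```

For example, searching the input of the model g(x) = x² for the target output 4 converges toward x = 2.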
“…A software implementation of a neural network is sluggish since the neurons have to be updated sequentially. Only hardware implementations, which explore the parallelism of neural network computing, can fully realize its potential [2], [3]. In prior efforts at applying neural network models in microwave design, while the neural network training and modeling operations could be accelerated by a neural network processor, the solution-searching optimization (e.g., a gradient method) remained as a software routine external to the neural network model.…”
Section: Introduction
confidence: 99%
“…Applications such as specialized numerical applications (e.g. encryption), DSP and image processing, cellular automata and other systolic arrays, and neural networks have been demonstrated on a number of FPGA machines [3]-[7], each with many FPGAs connected to banks of memory, with quoted speedups of 2 to 3 orders of magnitude over software. However, programming even trivial tasks on these machines is complex, and the development of support tools is an important ongoing research subject.…”
Section: Introduction
confidence: 99%