Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)
DOI: 10.1109/icnn.1994.374440

70 input, 20 nanosecond pattern classifier

Abstract: A CMOS neural network integrated circuit designed for very high speed applications is discussed. This full-custom, mixed analog-digital chip implements a fully connected feedforward neural network with 70 inputs, 6 hidden-layer neurons, and one output neuron. The neurons perform an inner-product operation and have a sigmoid-like activation function. The 70 network inputs and the neural signal processing are analog; the synaptic weights are digitally programmable with 5-bit (4 bits + sign) precision…
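As a rough behavioral sketch of the network described in the abstract (not the chip's circuit-level design), the 70-6-1 feedforward structure with 5-bit (4 bits + sign) weights can be modeled in a few lines of Python. The logistic sigmoid, the weight range, and the random weight values used here are illustrative assumptions; those details are not reproduced on this page.

```python
import numpy as np

def quantize_5bit(w, w_max=1.0):
    """Quantize real-valued weights to 5-bit (4-bit magnitude + sign) levels.
    The full-scale value w_max is an assumption; the chip's actual weight
    range is not given in this excerpt."""
    levels = 15  # 4 magnitude bits -> integer codes in -15..+15
    codes = np.clip(np.round(w / w_max * levels), -levels, levels)
    return codes / levels * w_max

def sigmoid(x):
    """Stand-in for the chip's 'sigmoid-like' activation function."""
    return 1.0 / (1.0 + np.exp(-x))

class PatternClassifier:
    """Behavioral model of the 70-6-1 fully connected feedforward network."""
    def __init__(self, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.w_hidden = quantize_5bit(rng.uniform(-1.0, 1.0, size=(6, 70)))
        self.w_out = quantize_5bit(rng.uniform(-1.0, 1.0, size=(1, 6)))

    def forward(self, x):
        """x: vector of 70 (analog) input values."""
        h = sigmoid(self.w_hidden @ x)      # 6 hidden neurons: inner product + sigmoid
        return sigmoid(self.w_out @ h)[0]   # single output neuron

clf = PatternClassifier()
x = np.random.default_rng(1).uniform(0.0, 1.0, size=70)
print(clf.forward(x))
```

In the actual chip the inner products and activations are computed by analog circuitry and only the weights are stored digitally, which is what makes the sub-20 ns classification time possible; the model above only mirrors the arithmetic, not the timing.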

Cited by 6 publications (2 citation statements) · References 4 publications
“…The BMP architecture with two layers of connection weights is the same as that implemented by [50], except that the latter uses integer input values, 5-bit synaptic weights, one output neuron, and a sigmoid-like activation function, whereas BMP uses binary input values, synaptic weights from multiple output neurons, and a binary hardlimiter as the activation function. Hence the computation delay of a BMP module implemented using current CMOS technology can be expected to be, at best, of the order of 20 ns.…”
Section: A. Performance of Hardware Realization of the NNLR Parser
mentioning
confidence: 99%
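To make the contrast drawn in the statement above concrete, here is a tiny sketch comparing a BMP-style hardlimiter neuron (binary inputs, thresholded binary output) with a sigmoid-style neuron of the kind implemented by [50]; the weights, threshold, and input values are illustrative assumptions, not values from either paper.

```python
import numpy as np

def hardlimiter_neuron(x_binary, w, threshold=0.0):
    """BMP-style neuron: binary inputs, weighted sum, binary hardlimiter output."""
    return 1 if np.dot(w, x_binary) >= threshold else 0

def sigmoid_neuron(x_analog, w):
    """Neuron of the kind described in [50]: inner product plus sigmoid-like activation."""
    return 1.0 / (1.0 + np.exp(-np.dot(w, x_analog)))

w = np.array([0.5, -0.25, 0.75, -0.5])                     # illustrative 4-input weights
print(hardlimiter_neuron(np.array([1, 0, 1, 1]), w))       # binary inputs -> 0/1 output
print(sigmoid_neuron(np.array([0.2, 0.8, 0.4, 0.1]), w))   # analog inputs -> graded output
```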
“…reports a measured propagation delay of 104 ns in a digital circuit with each synapse containing an 8-bit memory, an 8-bit subtractor and an 8-bit adder [50]. … reports throughput at a rate of 10 MHz (or, equivalently, a delay of 100 ns) in a Hamming Net pattern classifier using analog circuits [106]. … describes a hybrid analog-digital design with 5-bit (4 bits + sign) binary synapse weight values and current-summing circuits that is used to realize a 2-layer feed-forward ANN with a network computation delay of less than 20 ns. The 1st-layer and 2nd-layer subnetworks of the proposed neural architecture for database query processing are very similar to the 1st-layer subnetwork of a Hamming Net, and the neural architecture with 2 connection layers in the proposed ANN is exactly the same as that implemented by [106], except that [106] uses discretized inputs, 5-bit synaptic weights, and … a binary hardlimiter as the activation function.…”
mentioning
confidence: 99%
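For context on the Hamming Net comparison made in the statement above, the sketch below shows a Hamming Net's first (matching) layer, which scores an input pattern against stored exemplars; the bipolar {-1, +1} encoding and the exemplar patterns are illustrative assumptions, not details taken from [106].

```python
import numpy as np

def hamming_matching_layer(x, exemplars):
    """First-layer subnetwork of a Hamming Net: for a bipolar {-1, +1} input x,
    return the number of components matching each stored exemplar.
    With weights exemplar/2 and bias N/2, the score equals the match count."""
    n = exemplars.shape[1]
    return (exemplars @ x + n) / 2.0  # one matching score per exemplar

# Illustrative usage with made-up 8-bit exemplar patterns.
exemplars = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                      [ 1,  1,  1,  1, -1, -1, -1, -1]])
x = np.array([1, -1, 1, -1, 1, -1, 1, 1])
print(hamming_matching_layer(x, exemplars))  # [7. 3.]
```

Each matching score is just an inner product plus a bias, which is why the citing paper treats this layer as directly comparable to the inner-product neurons of the 70-input classifier.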