International Joint Conference on Neural Networks 1989
DOI: 10.1109/ijcnn.1989.118697

Design of parallel hardware neural network systems from custom analog VLSI 'building block' chips

Cited by 66 publications (15 citation statements)
References 9 publications
“…Others have shown that nonstochastic learning algorithms can be successfully applied with only nine bits of precision [53], and our simulations have shown that this application requires only seven or eight bits of precision (see Section VI-B). Therefore, for water vapor retrieval a neural network could be implemented with reduced-complexity VLSI circuits (which are smaller and consume less power than circuits featuring 32 bits of floating-point precision), and a network could then be placed on a satellite or ground station.…”
Section: E. RBF Network Versus Cascade-Correlation Network
confidence: 88%
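The precision figures quoted above translate directly into a software experiment. Below is a minimal sketch (our illustration, not code from the cited paper) of uniform fixed-point weight quantization; sweeping the bit width is one way to check how few bits a given application tolerates. The function name and the symmetric-range assumption are hypothetical.

```python
import numpy as np

def quantize_weights(w, bits):
    """Round weights to a signed uniform grid with `bits` bits.

    Assumes a symmetric range [-max|w|, +max|w|]; real analog hardware
    would impose its own range, mismatch, and noise.
    """
    levels = 2 ** (bits - 1) - 1            # e.g. 127 levels per side at 8 bits
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

# Toy check: output error of one linear layer at reduced precision.
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 8))                # hypothetical weight matrix
x = rng.normal(size=8)                      # hypothetical input vector
for bits in (7, 8, 9):
    err = np.max(np.abs(w @ x - quantize_weights(w, bits) @ x))
    print(f"{bits} bits: max output error {err:.4f}")
```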
“…VLSI implementations of ANNs usually reflect these equations, being composed of a matrix-vector multiplier followed by a non-linear activation function block. Often the multiplier and the activation function are placed on different chips to ensure cascadability and thus the ability of this chip set to implement arbitrary architectures [8]. For ease of implementation, the multiplier usually has current outputs and voltage inputs, as rows and columns can then be summed simply by the parallel connection of transconductance multiplying cells.…”
Section: Hardware-Efficient Multiplying DAC Synapses
confidence: 99%
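The current-summing point is worth making concrete. The following is a behavioral sketch of our own (names hypothetical, not from the cited chip set): each synapse cell converts an input voltage into a current scaled by its weight, and because all cells on one output row share a wire, Kirchhoff's current law performs the row sum for free.

```python
import numpy as np

def synapse_array_currents(v_in, g_weights):
    """Behavioral model of a voltage-in, current-out synapse array.

    v_in:      column input voltages, shape (n_in,)
    g_weights: programmed transconductances (the weights), shape (n_out, n_in)

    Each cell sources the current g[i, j] * v_in[j]; all cells on row i
    drive the same output wire, so their currents add with no extra
    hardware -- the matrix-vector product falls out of the wiring.
    """
    cell_currents = g_weights * v_in        # one current per multiplying cell
    return cell_currents.sum(axis=1)        # parallel connection sums each row

v = np.array([0.10, -0.30, 0.25])           # hypothetical input voltages (V)
g = np.array([[1.0, 0.5, -2.0],
              [0.0, 1.5,  1.0]])            # hypothetical transconductances (uA/V)
print(synapse_array_currents(v, g))         # equals g @ v
```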
“…Among other things, flexible topology [3], [12], [11], differential capacitive weight storage [4], [10], [13], inner-product multipliers [1], [2], [10], and hyperbolic-tangent activation functions [9], [10] have been considered. In this paper, we have combined and perturbed the existing solutions with our own work to obtain an efficient general-purpose ANN in analog VLSI.…”
Section: Introduction
confidence: 99%
“…Thus a hardware ANN could consist of a matrix-vector multiplier (a synapse chip) followed by a squashing-function vector (a neuron chip); it turns out that this splitting of the synapses and the neurons onto separate chips provides easy expandability for fully parallel systems [3], [7], [12]. In this paper, we present such an analog CMOS chip set.…”
Section: Introduction
confidence: 99%
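To illustrate the cascadability that the synapse-chip/neuron-chip split buys, here is a short behavioral sketch (our own; the class names are hypothetical): one stage does the matrix-vector multiply, the next applies a tanh squashing vector, and chaining alternating stages yields a fully parallel network of arbitrary depth.

```python
import numpy as np

class SynapseChip:
    """Behavioral stand-in for a matrix-vector multiplier chip."""
    def __init__(self, weights):
        self.weights = np.asarray(weights)  # shape (n_out, n_in)

    def __call__(self, x):
        return self.weights @ x             # the analog array does this in parallel

class NeuronChip:
    """Behavioral stand-in for a vector of tanh squashing circuits."""
    def __call__(self, x):
        return np.tanh(x)

# Arbitrary topologies are just alternating chains of the two chip types.
rng = np.random.default_rng(1)
stages = [SynapseChip(rng.normal(size=(4, 3))), NeuronChip(),
          SynapseChip(rng.normal(size=(2, 4))), NeuronChip()]

signal = np.array([0.5, -1.0, 0.2])         # hypothetical input vector
for stage in stages:
    signal = stage(signal)
print(signal)                               # two-layer network output
```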